I am setting up a single-node Kubernetes lab and learning how to set up NFS on Kubernetes.
I am following the Kubernetes NFS example step by step from the following link:
https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs
For the first section, the NFS server part, I executed 3 commands:
$ kubectl create -f examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-rc.yaml
$ kubectl create -f examples/volumes/nfs/nfs-server-service.yaml
I ran into a problem, where I see the following event:
PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
Research done:
https://github.com/kubernetes/kubernetes/issues/43120
https://github.com/kubernetes/examples/pull/30
None of the links above helped me resolve the issue I am experiencing.
I have made sure it is using image 0.8.
Image: gcr.io/google_containers/volume-nfs:0.8
Does anyone know what this message means?
Any clues or guidance on how to troubleshoot this issue would be very much appreciated.
Thank you.
$ docker version
Client:
Version: 17.09.0-ce
API version: 1.32
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:41:23 2017
OS/Arch: linux/amd64
Server:
Version: 17.09.0-ce
API version: 1.32 (minimum version 1.12)
Go version: go1.8.3
Git commit: afdb6d4
Built: Tue Sep 26 22:42:49 2017
OS/Arch: linux/amd64
Experimental: false
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:27:48Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
lab-kube-06 Ready master 2m v1.8.3
$ kubectl describe nodes lab-kube-06
Name: lab-kube-06
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=lab-kube-06
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: <none>
CreationTimestamp: Thu, 16 Nov 2017 16:51:28 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 16 Nov 2017 17:30:36 +0000 Thu, 16 Nov 2017 16:51:28 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 16 Nov 2017 17:30:36 +0000 Thu, 16 Nov 2017 16:51:28 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 16 Nov 2017 17:30:36 +0000 Thu, 16 Nov 2017 16:51:28 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Thu, 16 Nov 2017 17:30:36 +0000 Thu, 16 Nov 2017 16:51:28 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.0.0.6
Hostname: lab-kube-06
Capacity:
cpu: 2
memory: 8159076Ki
pods: 110
Allocatable:
cpu: 2
memory: 8056676Ki
pods: 110
System Info:
Machine ID: e198b57826ab4704a6526baea5fa1d06
System UUID: 05EF54CC-E8C8-874B-A708-BBC7BC140FF2
Boot ID: 3d64ad16-5603-42e9-bd34-84f6069ded5f
Kernel Version: 3.10.0-693.el7.x86_64
OS Image: Red Hat Enterprise Linux Server 7.4 (Maipo)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://Unknown
Kubelet Version: v1.8.3
Kube-Proxy Version: v1.8.3
ExternalID: lab-kube-06
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system etcd-lab-kube-06 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-lab-kube-06 250m (12%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-lab-kube-06 200m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-dns-545bc4bfd4-gmdvn 260m (13%) 0 (0%) 110Mi (1%) 170Mi (2%)
kube-system kube-proxy-68w8k 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-lab-kube-06 100m (5%) 0 (0%) 0 (0%) 0 (0%)
kube-system weave-net-7zlbg 20m (1%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
830m (41%) 0 (0%) 110Mi (1%) 170Mi (2%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 39m kubelet, lab-kube-06 Starting kubelet.
Normal NodeAllocatableEnforced 39m kubelet, lab-kube-06 Updated Node Allocatable limit across pods
Normal NodeHasSufficientDisk 39m (x8 over 39m) kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 39m (x8 over 39m) kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 39m (x7 over 39m) kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasNoDiskPressure
Normal Starting 38m kube-proxy, lab-kube-06 Starting kube-proxy.
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pv-provisioning-demo Pending 14s
$ kubectl get events
LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
18m 18m 1 lab-kube-06.14f79f093119829a Node Normal Starting kubelet, lab-kube-06 Starting kubelet.
18m 18m 8 lab-kube-06.14f79f0931d0eb6e Node Normal NodeHasSufficientDisk kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasSufficientDisk
18m 18m 8 lab-kube-06.14f79f0931d1253e Node Normal NodeHasSufficientMemory kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasSufficientMemory
18m 18m 7 lab-kube-06.14f79f0931d131be Node Normal NodeHasNoDiskPressure kubelet, lab-kube-06 Node lab-kube-06 status is now: NodeHasNoDiskPressure
18m 18m 1 lab-kube-06.14f79f0932f3f1b0 Node Normal NodeAllocatableEnforced kubelet, lab-kube-06 Updated Node Allocatable limit across pods
18m 18m 1 lab-kube-06.14f79f122a32282d Node Normal RegisteredNode controllermanager Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
17m 17m 1 lab-kube-06.14f79f1cdfc4c3b1 Node Normal Starting kube-proxy, lab-kube-06 Starting kube-proxy.
17m 17m 1 lab-kube-06.14f79f1d94ef1c17 Node Normal RegisteredNode controllermanager Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
14m 14m 1 lab-kube-06.14f79f4b91cf73b3 Node Normal RegisteredNode controllermanager Node lab-kube-06 event: Registered Node lab-kube-06 in Controller
58s 11m 42 nfs-pv-provisioning-demo.14f79f766cf887f2 PersistentVolumeClaim Normal FailedBinding persistentvolume-controller no persistent volumes available for this claim and no storage class is set
14s 4m 20 nfs-server-kq44h.14f79fd21b9db5f9 Pod Warning FailedScheduling default-scheduler PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
4m 4m 1 nfs-server.14f79fd21b946027 ReplicationController Normal SuccessfulCreate replication-controller Created pod: nfs-server-kq44h
2m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-server-kq44h 0/1 Pending 0 16s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-server-kq44h 0/1 Pending 0 26s
$ kubectl get rc
NAME DESIRED CURRENT READY AGE
nfs-server 1 1 0 40s
$ kubectl describe pods nfs-server-kq44h
Name: nfs-server-kq44h
Namespace: default
Node: <none>
Labels: role=nfs-server
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"nfs-server","uid":"5653eb53-caf0-11e7-ac02-000d3a04eb...
Status: Pending
IP:
Created By: ReplicationController/nfs-server
Controlled By: ReplicationController/nfs-server
Containers:
nfs-server:
Image: gcr.io/google_containers/volume-nfs:0.8
Ports: 2049/TCP, 20048/TCP, 111/TCP
Environment: <none>
Mounts:
/exports from mypvc (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-plgv5 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
mypvc:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs-pv-provisioning-demo
ReadOnly: false
default-token-plgv5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-plgv5
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 39s (x22 over 5m) default-scheduler PersistentVolumeClaim is not bound: "nfs-pv-provisioning-demo"
Each Persistent Volume Claim (PVC) needs a Persistent Volume (PV) that it can bind to. In your example, you have only created a PVC, but not the volume itself.
A PV can be created either manually, or automatically by using a storage class with a provisioner. Have a look at the docs on static and dynamic provisioning for more information:
There are two ways PVs may be provisioned: statically or dynamically.
Static
A cluster administrator creates a number of PVs. They carry the details of the real storage which is available for use by cluster users. [...]
Dynamic
When none of the static PVs the administrator created matches a user’s PersistentVolumeClaim, the cluster may try to dynamically provision a volume specially for the PVC. This provisioning is based on StorageClasses: the PVC must request a class and the administrator must have created and configured that class in order for dynamic provisioning to occur.
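To make the quoted passage concrete, dynamic provisioning looks roughly like this: a StorageClass names a provisioner, and the PVC requests that class by name. The class name and provisioner below are illustrative only and are not part of the NFS example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-class               # hypothetical class name
provisioner: kubernetes.io/gce-pd   # only works where this provisioner is actually available
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
spec:
  storageClassName: example-class   # request the class defined above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi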
In your example, you are creating a storage class provisioner (defined in examples/volumes/nfs/provisioner/nfs-server-gce-pv.yaml) that seems to be tailored for use within the Google cloud, so it will probably not be able to actually create PVs in your lab setup.
You can create a persistent volume manually on your own. After creating the PV, the PVC should automatically bind itself to the volume and your pods should start. Below is an example for a persistent volume that uses the node's local file system as a volume (which is probably OK for a one-node test setup):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: some-volume
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /path/on/host
For a production setup, you'll probably want to choose a volume type other than hostPath, although the volume types available to you will differ greatly depending on the environment you're in (cloud or self-hosted/bare-metal).
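Once the PV exists, binding can be verified with the commands below. This is just a sketch: it assumes the manifest above was saved as pv.yaml, and the claim name comes from the NFS example you are following:
kubectl create -f pv.yaml
kubectl get pv,pvc
# nfs-pv-provisioning-demo should move from Pending to Bound
kubectl get pods
# the nfs-server pod should then leave Pending and get scheduled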
Related
Vagrant, vm os: ubuntu/bionic64, swap disabled
Kubernetes version: 1.18.0
infrastructure: 1 haproxy node, 3 external etcd nodes and 3 kubernetes master nodes
Attempts: trying to set up HA Rancher, so I am first setting up an HA Kubernetes cluster with kubeadm by following the official docs
Expected behavior: all k8s components are up and I can navigate to Weave Scope to see all nodes
Actual behavior: CoreDNS is still not ready even after installing the CNI (Weave Net), so Weave Scope (the nice visualization UI) is not working, since it needs networking (Weave Net and CoreDNS) to work properly.
# kubeadm config
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "172.16.0.30:6443"
etcd:
  external:
    caFile: /etc/rancher-certs/ca-chain.cert.pem
    keyFile: /etc/rancher-certs/etcd.key.pem
    certFile: /etc/rancher-certs/etcd.cert.pem
    endpoints:
      - https://172.16.0.20:2379
      - https://172.16.0.21:2379
      - https://172.16.0.22:2379
-------------------------------------------------------------------------------
# firewall
vagrant@rancher-0:~$ sudo ufw status
Status: active
To Action From
-- ------ ----
OpenSSH ALLOW Anywhere
Anywhere ALLOW 172.16.0.0/26
OpenSSH (v6) ALLOW Anywhere (v6)
-------------------------------------------------------------------------------
# no swap
vagrant@rancher-0:~$ free -h
total used free shared buff/cache available
Mem: 1.9G 928M 97M 1.4M 966M 1.1G
Swap: 0B 0B 0B
k8s diagnostic output:
vagrant@rancher-0:~$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rancher-0 Ready master 14m v1.18.0 10.0.2.15 <none> Ubuntu 18.04.4 LTS 4.15.0-99-generic docker://19.3.12
rancher-1 Ready master 9m23s v1.18.0 10.0.2.15 <none> Ubuntu 18.04.4 LTS 4.15.0-99-generic docker://19.3.12
rancher-2 Ready master 4m26s v1.18.0 10.0.2.15 <none> Ubuntu 18.04.4 LTS 4.15.0-99-generic docker://19.3.12
vagrant@rancher-0:~$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cert-manager cert-manager ClusterIP 10.106.146.236 <none> 9402/TCP 17m
cert-manager cert-manager-webhook ClusterIP 10.102.162.87 <none> 443/TCP 17m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 18m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 18m
weave weave-scope-app NodePort 10.96.110.153 <none> 80:30276/TCP 17m
vagrant@rancher-0:~$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cert-manager cert-manager-bd9d585bd-x8qpb 0/1 Pending 0 16m <none> <none> <none> <none>
cert-manager cert-manager-cainjector-76c6657c55-d8fpj 0/1 Pending 0 16m <none> <none> <none> <none>
cert-manager cert-manager-webhook-64b9b4fdfd-sspjx 0/1 Pending 0 16m <none> <none> <none> <none>
kube-system coredns-66bff467f8-9z4f8 0/1 Running 0 10m 10.32.0.2 rancher-1 <none> <none>
kube-system coredns-66bff467f8-zkk99 0/1 Running 0 16m 10.32.0.2 rancher-0 <none> <none>
kube-system kube-apiserver-rancher-0 1/1 Running 0 16m 10.0.2.15 rancher-0 <none> <none>
kube-system kube-apiserver-rancher-1 1/1 Running 0 12m 10.0.2.15 rancher-1 <none> <none>
kube-system kube-apiserver-rancher-2 1/1 Running 0 7m23s 10.0.2.15 rancher-2 <none> <none>
kube-system kube-controller-manager-rancher-0 1/1 Running 0 16m 10.0.2.15 rancher-0 <none> <none>
kube-system kube-controller-manager-rancher-1 1/1 Running 0 12m 10.0.2.15 rancher-1 <none> <none>
kube-system kube-controller-manager-rancher-2 1/1 Running 0 7m24s 10.0.2.15 rancher-2 <none> <none>
kube-system kube-proxy-grts7 1/1 Running 0 12m 10.0.2.15 rancher-1 <none> <none>
kube-system kube-proxy-jv9lm 1/1 Running 0 16m 10.0.2.15 rancher-0 <none> <none>
kube-system kube-proxy-z2lrc 1/1 Running 0 7m25s 10.0.2.15 rancher-2 <none> <none>
kube-system kube-scheduler-rancher-0 1/1 Running 0 16m 10.0.2.15 rancher-0 <none> <none>
kube-system kube-scheduler-rancher-1 1/1 Running 0 12m 10.0.2.15 rancher-1 <none> <none>
kube-system kube-scheduler-rancher-2 1/1 Running 0 7m23s 10.0.2.15 rancher-2 <none> <none>
kube-system weave-net-nnvkd 2/2 Running 0 7m25s 10.0.2.15 rancher-2 <none> <none>
kube-system weave-net-pgxnq 2/2 Running 0 12m 10.0.2.15 rancher-1 <none> <none>
kube-system weave-net-q22bh 2/2 Running 0 16m 10.0.2.15 rancher-0 <none> <none>
weave weave-scope-agent-9gwj2 1/1 Running 0 16m 10.0.2.15 rancher-0 <none> <none>
weave weave-scope-agent-mznp7 1/1 Running 0 7m25s 10.0.2.15 rancher-2 <none> <none>
weave weave-scope-agent-v7jql 1/1 Running 0 12m 10.0.2.15 rancher-1 <none> <none>
weave weave-scope-app-bc7444d59-cjpd8 0/1 Pending 0 16m <none> <none> <none> <none>
weave weave-scope-cluster-agent-5c5dcc8cb-ln4hg 0/1 Pending 0 16m <none> <none> <none> <none>
vagrant@rancher-0:~$ kubectl describe node rancher-0
Name: rancher-0
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=rancher-0
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 28 Jul 2020 09:24:17 +0000
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: rancher-0
AcquireTime: <unset>
RenewTime: Tue, 28 Jul 2020 09:35:33 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Tue, 28 Jul 2020 09:24:47 +0000 Tue, 28 Jul 2020 09:24:47 +0000 WeaveIsUp Weave pod has set this
MemoryPressure False Tue, 28 Jul 2020 09:35:26 +0000 Tue, 28 Jul 2020 09:24:17 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 28 Jul 2020 09:35:26 +0000 Tue, 28 Jul 2020 09:24:17 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 28 Jul 2020 09:35:26 +0000 Tue, 28 Jul 2020 09:24:17 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 28 Jul 2020 09:35:26 +0000 Tue, 28 Jul 2020 09:24:52 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.0.2.15
Hostname: rancher-0
Capacity:
cpu: 2
ephemeral-storage: 10098432Ki
hugepages-2Mi: 0
memory: 2040812Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 9306714916
hugepages-2Mi: 0
memory: 1938412Ki
pods: 110
System Info:
Machine ID: 9b1bc8a8ef2c4e5b844624a36302d877
System UUID: A282600C-28F8-4D49-A9D3-6F05CA16865E
Boot ID: 77746bf5-7941-4e72-817e-24f149172158
Kernel Version: 4.15.0-99-generic
OS Image: Ubuntu 18.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.12
Kubelet Version: v1.18.0
Kube-Proxy Version: v1.18.0
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-66bff467f8-zkk99 100m (5%) 0 (0%) 70Mi (3%) 170Mi (8%) 11m
kube-system kube-apiserver-rancher-0 250m (12%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-controller-manager-rancher-0 200m (10%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-proxy-jv9lm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system kube-scheduler-rancher-0 100m (5%) 0 (0%) 0 (0%) 0 (0%) 11m
kube-system weave-net-q22bh 20m (1%) 0 (0%) 0 (0%) 0 (0%) 11m
weave weave-scope-agent-9gwj2 100m (5%) 0 (0%) 100Mi (5%) 2000Mi (105%) 11m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 770m (38%) 0 (0%)
memory 170Mi (8%) 2170Mi (114%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 11m kubelet, rancher-0 Starting kubelet.
Warning ImageGCFailed 11m kubelet, rancher-0 failed to get imageFs info: unable to find data in memory cache
Normal NodeHasSufficientMemory 11m (x3 over 11m) kubelet, rancher-0 Node rancher-0 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m (x3 over 11m) kubelet, rancher-0 Node rancher-0 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m (x2 over 11m) kubelet, rancher-0 Node rancher-0 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 11m kubelet, rancher-0 Updated Node Allocatable limit across pods
Normal Starting 11m kubelet, rancher-0 Starting kubelet.
Normal NodeHasSufficientMemory 11m kubelet, rancher-0 Node rancher-0 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 11m kubelet, rancher-0 Node rancher-0 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 11m kubelet, rancher-0 Node rancher-0 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 11m kubelet, rancher-0 Updated Node Allocatable limit across pods
Normal Starting 11m kube-proxy, rancher-0 Starting kube-proxy.
Normal NodeReady 10m kubelet, rancher-0 Node rancher-0 status is now: NodeReady
vagrant@rancher-0:~$ kubectl exec -n kube-system weave-net-nnvkd -c weave -- /home/weave/weave --local status
Version: 2.6.5 (failed to check latest version - see logs; next check at 2020/07/28 15:27:34)
Service: router
Protocol: weave 1..2
Name: 5a:40:7b:be:35:1d(rancher-2)
Encryption: disabled
PeerDiscovery: enabled
Targets: 0
Connections: 0
Peers: 1
TrustedSubnets: none
Service: ipam
Status: ready
Range: 10.32.0.0/12
DefaultSubnet: 10.32.0.0/12
vagrant@rancher-0:~$ kubectl logs weave-net-nnvkd -c weave -n kube-system
INFO: 2020/07/28 09:34:15.989759 Command line options: map[conn-limit:200 datapath:datapath db-prefix:/weavedb/weave-net docker-api: expect-npc:true host-root:/host http-addr:127.0.0.1:6784 ipalloc-init:consensus=0 ipalloc-range:10.32.0.0/12 metrics-addr:0.0.0.0:6782 name:5a:40:7b:be:35:1d nickname:rancher-2 no-dns:true port:6783]
INFO: 2020/07/28 09:34:15.989792 weave 2.6.5
INFO: 2020/07/28 09:34:16.178429 Bridge type is bridged_fastdp
INFO: 2020/07/28 09:34:16.178451 Communication between peers is unencrypted.
INFO: 2020/07/28 09:34:16.182442 Our name is 5a:40:7b:be:35:1d(rancher-2)
INFO: 2020/07/28 09:34:16.182499 Launch detected - using supplied peer list: []
INFO: 2020/07/28 09:34:16.196598 Checking for pre-existing addresses on weave bridge
INFO: 2020/07/28 09:34:16.204735 [allocator 5a:40:7b:be:35:1d] No valid persisted data
INFO: 2020/07/28 09:34:16.206236 [allocator 5a:40:7b:be:35:1d] Initialising via deferred consensus
INFO: 2020/07/28 09:34:16.206291 Sniffing traffic on datapath (via ODP)
INFO: 2020/07/28 09:34:16.210065 Listening for HTTP control messages on 127.0.0.1:6784
INFO: 2020/07/28 09:34:16.210471 Listening for metrics requests on 0.0.0.0:6782
INFO: 2020/07/28 09:34:16.275523 Error checking version: Get https://checkpoint-api.weave.works/v1/check/weave-net?arch=amd64&flag_docker-version=none&flag_kernel-version=4.15.0-99-generic&flag_kubernetes-cluster-size=0&flag_kubernetes-cluster-uid=aca5a8cc-27ca-4e8f-9964-4cf3971497c6&flag_kubernetes-version=v1.18.6&os=linux&signature=7uMaGpuc3%2F8ZtHqGoHyCnJ5VfOJUmnL%2FD6UZSqWYxKA%3D&version=2.6.5: dial tcp: lookup checkpoint-api.weave.works on 10.96.0.10:53: write udp 10.0.2.15:43742->10.96.0.10:53: write: operation not permitted
INFO: 2020/07/28 09:34:17.052454 [kube-peers] Added myself to peer list &{[{96:cd:5b:7f:65:73 rancher-1} {5a:40:7b:be:35:1d rancher-2}]}
DEBU: 2020/07/28 09:34:17.065599 [kube-peers] Nodes that have disappeared: map[96:cd:5b:7f:65:73:{96:cd:5b:7f:65:73 rancher-1}]
DEBU: 2020/07/28 09:34:17.065836 [kube-peers] Preparing to remove disappeared peer 96:cd:5b:7f:65:73
DEBU: 2020/07/28 09:34:17.079511 [kube-peers] Noting I plan to remove 96:cd:5b:7f:65:73
DEBU: 2020/07/28 09:34:17.095598 weave DELETE to http://127.0.0.1:6784/peer/96:cd:5b:7f:65:73 with map[]
INFO: 2020/07/28 09:34:17.097095 [kube-peers] rmpeer of 96:cd:5b:7f:65:73: 0 IPs taken over from 96:cd:5b:7f:65:73
DEBU: 2020/07/28 09:34:17.644909 [kube-peers] Nodes that have disappeared: map[]
INFO: 2020/07/28 09:34:17.658557 Assuming quorum size of 1
10.32.0.1
DEBU: 2020/07/28 09:34:17.761697 registering for updates for node delete events
vagrant@rancher-0:~$ kubectl logs coredns-66bff467f8-9z4f8 -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0728 09:31:10.764496 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go#v0.17.2/tools/cache/reflector.go:105 (started: 2020-07-28 09:30:40.763691008 +0000 UTC m=+0.308910646) (total time: 30.000692218s):
Trace[2019727887]: [30.000692218s] [30.000692218s] END
E0728 09:31:10.764526 1 reflector.go:153] pkg/mod/k8s.io/client-go#v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0728 09:31:10.764666 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go#v0.17.2/tools/cache/reflector.go:105 (started: 2020-07-28 09:30:40.761333538 +0000 UTC m=+0.306553222) (total time: 30.00331917s):
Trace[1427131847]: [30.00331917s] [30.00331917s] END
E0728 09:31:10.764673 1 reflector.go:153] pkg/mod/k8s.io/client-go#v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
I0728 09:31:10.767435 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go#v0.17.2/tools/cache/reflector.go:105 (started: 2020-07-28 09:30:40.762085835 +0000 UTC m=+0.307305485) (total time: 30.005326233s):
Trace[939984059]: [30.005326233s] [30.005326233s] END
E0728 09:31:10.767569 1 reflector.go:153] pkg/mod/k8s.io/client-go#v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
...
vagrant@rancher-0:~$ kubectl describe pod coredns-66bff467f8-9z4f8 -n kube-system
Name: coredns-66bff467f8-9z4f8
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: rancher-1/10.0.2.15
Start Time: Tue, 28 Jul 2020 09:30:38 +0000
Labels: k8s-app=kube-dns
pod-template-hash=66bff467f8
Annotations: <none>
Status: Running
IP: 10.32.0.2
IPs:
IP: 10.32.0.2
Controlled By: ReplicaSet/coredns-66bff467f8
Containers:
coredns:
Container ID: docker://899cfd54a5281939dcb09eece96ff3024a3b4c444e982bda74b8334504a6a369
Image: k8s.gcr.io/coredns:1.6.7
Image ID: docker-pullable://k8s.gcr.io/coredns#sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
State: Running
Started: Tue, 28 Jul 2020 09:30:40 +0000
Ready: False
Restart Count: 0
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from coredns-token-znl2p (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
coredns-token-znl2p:
Type: Secret (a volume populated by a Secret)
SecretName: coredns-token-znl2p
Optional: false
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 28m default-scheduler Successfully assigned kube-system/coredns-66bff467f8-9z4f8 to rancher-1
Normal Pulled 28m kubelet, rancher-1 Container image "k8s.gcr.io/coredns:1.6.7" already present on machine
Normal Created 28m kubelet, rancher-1 Created container coredns
Normal Started 28m kubelet, rancher-1 Started container coredns
Warning Unhealthy 3m35s (x151 over 28m) kubelet, rancher-1 Readiness probe failed: HTTP probe failed with statuscode: 503
Edit 0:
The issue is solved. The problem was that I had configured a ufw rule to allow the CIDR of my VM network, but it did not allow traffic from Kubernetes (from the Docker containers). I configured ufw to allow the ports documented on the Kubernetes website and the ports documented on the Weave website, and now the cluster is working as expected.
As @shadowlegend said, the issue is solved: the problem was a ufw rule that allowed the CIDR of the VM network but did not allow traffic from Kubernetes (from the Docker containers). Configure ufw to allow the ports documented on the Kubernetes website and the ports documented on the Weave website, and the cluster will work as expected.
Take a look: ufw-firewall-kubernetes.
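For reference, a rough sketch of ufw rules that cover the usual kubeadm control-plane ports and the Weave Net ports; the exact list you need depends on your topology (for example whether etcd is external, as it is here):
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd client/peer traffic
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10251/tcp       # kube-scheduler
sudo ufw allow 10252/tcp       # kube-controller-manager
sudo ufw allow 6783/tcp        # Weave Net control
sudo ufw allow 6783:6784/udp   # Weave Net data path
sudo ufw reload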
NOTE:
Those same playbooks work as expected on Google Cloud.
I am a complete beginner with Kubernetes, so sorry if this is a dumb question.
I am using minikube with the kvm2 driver (5.0.0). Here is the info about the minikube and kubectl versions.
Minikube status output
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured
kubectl cluster-info output:
Kubernetes master is running at https://127.0.0.1:32768
KubeDNS is running at https://127.0.0.1:32768/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
I am trying to deploy a pod using kubectl apply -f client-pod.yaml. Here is my client-pod.yaml configuration
apiVersion: v1
kind: Pod
metadata:
  name: client-pod
  labels:
    component: web
spec:
  containers:
    - name: client
      image: stephengrider/multi-client
      ports:
        - containerPort: 3000
This is the kubectl get pods output:
NAME READY STATUS RESTARTS AGE
client-pod 0/1 Pending 0 4m15s
kubectl describe pods output:
Name: client-pod
Namespace: default
Priority: 0
Node: <none>
Labels: component=web
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"component":"web"},"name":"client-pod","namespace":"default"},"spec...
Status: Pending
IP:
IPs: <none>
Containers:
client:
Image: stephengrider/multi-client
Port: 3000/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-z45bq (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-z45bq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-z45bq
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 node(s) had taints that the pod didn't tolerate.
I have been searching without luck for a way to see which taint is stopping the pod from initializing.
Is there a way to see the taint that is failing?
kubectl get nodes output:
NAME STATUS ROLES AGE VERSION
m01 Ready master 11h v1.17.3
-- EDIT --
kubectl describe nodes output:
Name: home-pc
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=home-pc
kubernetes.io/os=linux
minikube.k8s.io/commit=eb13446e786c9ef70cb0a9f85a633194e62396a1
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_03_17T22_51_28_0700
minikube.k8s.io/version=v1.8.2
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Tue, 17 Mar 2020 22:51:25 -0500
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: home-pc
AcquireTime: <unset>
RenewTime: Tue, 17 Mar 2020 22:51:41 -0500
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 17 Mar 2020 22:51:41 -0500 Tue, 17 Mar 2020 22:51:21 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 17 Mar 2020 22:51:41 -0500 Tue, 17 Mar 2020 22:51:21 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 17 Mar 2020 22:51:41 -0500 Tue, 17 Mar 2020 22:51:21 -0500 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 17 Mar 2020 22:51:41 -0500 Tue, 17 Mar 2020 22:51:41 -0500 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 192.168.0.12
Hostname: home-pc
Capacity:
cpu: 12
ephemeral-storage: 227688908Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8159952Ki
pods: 110
Allocatable:
cpu: 12
ephemeral-storage: 209838097266
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8057552Ki
pods: 110
System Info:
Machine ID: 339d426453b4492da92f75d06acc1e0d
System UUID: 62eedb55-444f-61ce-75e9-b06ebf3331a0
Boot ID: a9ae9889-d7cb-48c5-ae75-b2052292ac7a
Kernel Version: 5.0.0-38-generic
OS Image: Ubuntu 19.04
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.5
Kubelet Version: v1.17.3
Kube-Proxy Version: v1.17.3
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-6955765f44-mbwqt 100m (0%) 0 (0%) 70Mi (0%) 170Mi (2%) 10s
kube-system coredns-6955765f44-sblf2 100m (0%) 0 (0%) 70Mi (0%) 170Mi (2%) 10s
kube-system etcd-home-pc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13s
kube-system kube-apiserver-home-pc 250m (2%) 0 (0%) 0 (0%) 0 (0%) 13s
kube-system kube-controller-manager-home-pc 200m (1%) 0 (0%) 0 (0%) 0 (0%) 13s
kube-system kube-proxy-lk7xs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 10s
kube-system kube-scheduler-home-pc 100m (0%) 0 (0%) 0 (0%) 0 (0%) 12s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (6%) 0 (0%)
memory 140Mi (1%) 340Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 24s kubelet, home-pc Starting kubelet.
Normal NodeHasSufficientMemory 23s (x4 over 24s) kubelet, home-pc Node home-pc status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 23s (x3 over 24s) kubelet, home-pc Node home-pc status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 23s (x3 over 24s) kubelet, home-pc Node home-pc status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 23s kubelet, home-pc Updated Node Allocatable limit across pods
Normal Starting 13s kubelet, home-pc Starting kubelet.
Normal NodeHasSufficientMemory 13s kubelet, home-pc Node home-pc status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 13s kubelet, home-pc Node home-pc status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 13s kubelet, home-pc Node home-pc status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 13s kubelet, home-pc Updated Node Allocatable limit across pods
Normal Starting 9s kube-proxy, home-pc Starting kube-proxy.
Normal NodeReady 3s kubelet, home-pc Node home-pc status is now: NodeReady
You have some taints on the node which are stopping the scheduler from deploying the pod. Either remove the taint from the master node or add a toleration to the pod spec, as sketched below.
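A rough sketch of both options; the taint key below is the standard master taint, so check what your node actually carries first:
# show the taints on every node
kubectl describe nodes | grep -i taints
# option 1: remove the master taint from all nodes
kubectl taint nodes --all node-role.kubernetes.io/master-
# option 2: keep the taint and add a toleration to the pod spec, e.g.
spec:
  tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
  containers:
    - name: client
      image: stephengrider/multi-client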
I have just terminated an AWS K8S node, and K8S recreated a new one and installed new pods. Everything seems good so far.
But when I do:
kubectl get po -A
I get:
kube-system cluster-autoscaler-648b4df947-42hxv 0/1 Evicted 0 3m53s
kube-system cluster-autoscaler-648b4df947-45pcc 0/1 Evicted 0 47m
kube-system cluster-autoscaler-648b4df947-46w6h 0/1 Evicted 0 91m
kube-system cluster-autoscaler-648b4df947-4tlbl 0/1 Evicted 0 69m
kube-system cluster-autoscaler-648b4df947-52295 0/1 Evicted 0 3m54s
kube-system cluster-autoscaler-648b4df947-55wzb 0/1 Evicted 0 83m
kube-system cluster-autoscaler-648b4df947-57kv5 0/1 Evicted 0 107m
kube-system cluster-autoscaler-648b4df947-69rsl 0/1 Evicted 0 98m
kube-system cluster-autoscaler-648b4df947-6msx2 0/1 Evicted 0 11m
kube-system cluster-autoscaler-648b4df947-6pphs 0 18m
kube-system dns-controller-697f6d9457-zswm8 0/1 Evicted 0 54m
When I do:
kubectl describe pod -n kube-system dns-controller-697f6d9457-zswm8
I get:
➜ monitoring git:(master) ✗ kubectl describe pod -n kube-system dns-controller-697f6d9457-zswm8
Name: dns-controller-697f6d9457-zswm8
Namespace: kube-system
Priority: 0
Node: ip-172-20-57-13.eu-west-3.compute.internal/
Start Time: Mon, 07 Oct 2019 12:35:06 +0200
Labels: k8s-addon=dns-controller.addons.k8s.io
k8s-app=dns-controller
pod-template-hash=697f6d9457
version=v1.12.0
Annotations: scheduler.alpha.kubernetes.io/critical-pod:
Status: Failed
Reason: Evicted
Message: The node was low on resource: ephemeral-storage. Container dns-controller was using 48Ki, which exceeds its request of 0.
IP:
IPs: <none>
Controlled By: ReplicaSet/dns-controller-697f6d9457
Containers:
dns-controller:
Image: kope/dns-controller:1.12.0
Port: <none>
Host Port: <none>
Command:
/usr/bin/dns-controller
--watch-ingress=false
--dns=aws-route53
--zone=*/ZDOYTALGJJXCM
--zone=*/*
-v=2
Requests:
cpu: 50m
memory: 50Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from dns-controller-token-gvxxd (ro)
Volumes:
dns-controller-token-gvxxd:
Type: Secret (a volume populated by a Secret)
SecretName: dns-controller-token-gvxxd
Optional: false
QoS Class: Burstable
Node-Selectors: node-role.kubernetes.io/master=
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Evicted 59m kubelet, ip-172-20-57-13.eu-west-3.compute.internal The node was low on resource: ephemeral-storage. Container dns-controller was using 48Ki, which exceeds its request of 0.
Normal Killing 59m kubelet, ip-172-20-57-13.eu-west-3.compute.internal Killing container with id docker://dns-controller:Need to kill Pod
And:
➜ monitoring git:(master) ✗ kubectl describe pod -n kube-system cluster-autoscaler-648b4df947-2zcrz
Name: cluster-autoscaler-648b4df947-2zcrz
Namespace: kube-system
Priority: 0
Node: ip-172-20-57-13.eu-west-3.compute.internal/
Start Time: Mon, 07 Oct 2019 13:26:26 +0200
Labels: app=cluster-autoscaler
k8s-addon=cluster-autoscaler.addons.k8s.io
pod-template-hash=648b4df947
Annotations: prometheus.io/port: 8085
prometheus.io/scrape: true
scheduler.alpha.kubernetes.io/tolerations: [{"key":"dedicated", "value":"master"}]
Status: Failed
Reason: Evicted
Message: Pod The node was low on resource: [DiskPressure].
IP:
IPs: <none>
Controlled By: ReplicaSet/cluster-autoscaler-648b4df947
Containers:
cluster-autoscaler:
Image: gcr.io/google-containers/cluster-autoscaler:v1.15.1
Port: <none>
Host Port: <none>
Command:
./cluster-autoscaler
--v=4
--stderrthreshold=info
--cloud-provider=aws
--skip-nodes-with-local-storage=false
--nodes=0:1:pamela-nodes.k8s-prod.sunchain.fr
Limits:
cpu: 100m
memory: 300Mi
Requests:
cpu: 100m
memory: 300Mi
Liveness: http-get http://:8085/health-check delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8085/health-check delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
AWS_REGION: eu-west-3
Mounts:
/etc/ssl/certs/ca-certificates.crt from ssl-certs (ro)
/var/run/secrets/kubernetes.io/serviceaccount from cluster-autoscaler-token-hld2m (ro)
Volumes:
ssl-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs/ca-certificates.crt
HostPathType:
cluster-autoscaler-token-hld2m:
Type: Secret (a volume populated by a Secret)
SecretName: cluster-autoscaler-token-hld2m
Optional: false
QoS Class: Guaranteed
Node-Selectors: kubernetes.io/role=master
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned kube-system/cluster-autoscaler-648b4df947-2zcrz to ip-172-20-57-13.eu-west-3.compute.internal
Warning Evicted 11m kubelet, ip-172-20-57-13.eu-west-3.compute.internal The node was low on resource: [DiskPressure].
It seems to be a resource issue. The weird thing is that before I killed my EC2 instance, I didn't have this issue.
Why is it happening and what should I do? Is it mandatory to add more resources?
➜ scripts kubectl describe node ip-172-20-57-13.eu-west-3.compute.internal
Name: ip-172-20-57-13.eu-west-3.compute.internal
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=t3.small
beta.kubernetes.io/os=linux
failure-domain.beta.kubernetes.io/region=eu-west-3
failure-domain.beta.kubernetes.io/zone=eu-west-3a
kops.k8s.io/instancegroup=master-eu-west-3a
kubernetes.io/hostname=ip-172-20-57-13.eu-west-3.compute.internal
kubernetes.io/role=master
node-role.kubernetes.io/master=
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 28 Aug 2019 09:38:09 +0200
Taints: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/disk-pressure:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Wed, 28 Aug 2019 09:38:36 +0200 Wed, 28 Aug 2019 09:38:36 +0200 RouteCreated RouteController created a route
OutOfDisk False Mon, 07 Oct 2019 14:14:32 +0200 Wed, 28 Aug 2019 09:38:09 +0200 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Mon, 07 Oct 2019 14:14:32 +0200 Wed, 28 Aug 2019 09:38:09 +0200 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure True Mon, 07 Oct 2019 14:14:32 +0200 Mon, 07 Oct 2019 14:11:02 +0200 KubeletHasDiskPressure kubelet has disk pressure
PIDPressure False Mon, 07 Oct 2019 14:14:32 +0200 Wed, 28 Aug 2019 09:38:09 +0200 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 07 Oct 2019 14:14:32 +0200 Wed, 28 Aug 2019 09:38:35 +0200 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 172.20.57.13
ExternalIP: 35.180.187.101
InternalDNS: ip-172-20-57-13.eu-west-3.compute.internal
Hostname: ip-172-20-57-13.eu-west-3.compute.internal
ExternalDNS: ec2-35-180-187-101.eu-west-3.compute.amazonaws.com
Capacity:
attachable-volumes-aws-ebs: 25
cpu: 2
ephemeral-storage: 7797156Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2013540Ki
pods: 110
Allocatable:
attachable-volumes-aws-ebs: 25
cpu: 2
ephemeral-storage: 7185858958
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1911140Ki
pods: 110
System Info:
Machine ID: ec2b3aa5df0e3ad288d210f309565f06
System UUID: EC2B3AA5-DF0E-3AD2-88D2-10F309565F06
Boot ID: f9d5417b-eba9-4544-9710-a25d01247b46
Kernel Version: 4.9.0-9-amd64
OS Image: Debian GNU/Linux 9 (stretch)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.3
Kubelet Version: v1.12.10
Kube-Proxy Version: v1.12.10
PodCIDR: 100.96.1.0/24
ProviderID: aws:///eu-west-3a/i-03bf1b26313679d65
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-manager-events-ip-172-20-57-13.eu-west-3.compute.internal 100m (5%) 0 (0%) 100Mi (5%) 0 (0%) 40d
kube-system etcd-manager-main-ip-172-20-57-13.eu-west-3.compute.internal 200m (10%) 0 (0%) 100Mi (5%) 0 (0%) 40d
kube-system kube-apiserver-ip-172-20-57-13.eu-west-3.compute.internal 150m (7%) 0 (0%) 0 (0%) 0 (0%) 40d
kube-system kube-controller-manager-ip-172-20-57-13.eu-west-3.compute.internal 100m (5%) 0 (0%) 0 (0%) 0 (0%) 40d
kube-system kube-proxy-ip-172-20-57-13.eu-west-3.compute.internal 100m (5%) 0 (0%) 0 (0%) 0 (0%) 40d
kube-system kube-scheduler-ip-172-20-57-13.eu-west-3.compute.internal 100m (5%) 0 (0%) 0 (0%) 0 (0%) 40d
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 750m (37%) 0 (0%)
memory 200Mi (10%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
attachable-volumes-aws-ebs 0 0
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasNoDiskPressure 55m (x324 over 40d) kubelet, ip-172-20-57-13.eu-west-3.compute.internal Node ip-172-20-57-13.eu-west-3.compute.internal status is now: NodeHasNoDiskPressure
Warning EvictionThresholdMet 10m (x1809 over 16d) kubelet, ip-172-20-57-13.eu-west-3.compute.internal Attempting to reclaim ephemeral-storage
Warning ImageGCFailed 4m30s (x6003 over 23d) kubelet, ip-172-20-57-13.eu-west-3.compute.internal (combined from similar events): wanted to free 652348620 bytes, but freed 0 bytes space with errors in image deletion: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete dd37681076e1 (cannot be forced) - image is being used by running container b1800146af29
I think a better command to debug it is:
devops git:(master) ✗ kubectl get events --sort-by=.metadata.creationTimestamp -o wide
LAST SEEN TYPE REASON KIND SOURCE MESSAGE SUBOBJECT FIRST SEEN COUNT NAME
10m Warning ImageGCFailed Node kubelet, ip-172-20-57-13.eu-west-3.compute.internal (combined from similar events): wanted to free 653307084 bytes, but freed 0 bytes space with errors in image deletion: rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete dd37681076e1 (cannot be forced) - image is being used by running container b1800146af29 23d 6004 ip-172-20-57-13.eu-west-3.compute.internal.15c4124e15eb1d33
2m59s Warning ImageGCFailed Node kubelet, ip-172-20-36-135.eu-west-3.compute.internal (combined from similar events): failed to garbage collect required amount of images. Wanted to free 639524044 bytes, but freed 0 bytes 7d9h 2089 ip-172-20-36-135.eu-west-3.compute.internal.15c916d24afe2c25
4m59s Warning ImageGCFailed Node kubelet, ip-172-20-33-81.eu-west-3.compute.internal (combined from similar events): failed to garbage collect required amount of images. Wanted to free 458296524 bytes, but freed 0 bytes 4d14h 1183 ip-172-20-33-81.eu-west-3.compute.internal.15c9f3fe4e1525ec
6m43s Warning EvictionThresholdMet Node kubelet, ip-172-20-57-13.eu-west-3.compute.internal Attempting to reclaim ephemeral-storage 16d 1841 ip-172-20-57-13.eu-west-3.compute.internal.15c66e349b761219
41s Normal NodeHasNoDiskPressure Node kubelet, ip-172-20-57-13.eu-west-3.compute.internal Node ip-172-20-57-13.eu-west-3.compute.internal status is now: NodeHasNoDiskPressure 40d 333 ip-172-20-57-13.eu-west-3.compute.internal.15bf05cec37981b6
Now df -h
admin@ip-172-20-57-13:/var/log$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 972M 0 972M 0% /dev
tmpfs 197M 2.3M 195M 2% /run
/dev/nvme0n1p2 7.5G 6.4G 707M 91% /
tmpfs 984M 0 984M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 984M 0 984M 0% /sys/fs/cgroup
/dev/nvme1n1 20G 430M 20G 3% /mnt/master-vol-09618123eb79d92c8
/dev/nvme2n1 20G 229M 20G 2% /mnt/master-vol-05c9684f0edcbd876
It looks like your master node is running low on storage; there is less than 1GB of ephemeral storage available (the root filesystem is 91% full).
You should free up some space on the node and master. That should get rid of your problem.
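A few commands that usually reclaim space on a node like this one; a sketch only, since how much each frees depends on what is actually on the disk:
docker system prune -f                     # remove stopped containers, dangling images, unused networks
docker ps --format '{{.ID}} {{.Image}}'    # the GC error above says an image is pinned by a running container
sudo journalctl --vacuum-size=200M         # trim old journal logs
df -h /                                    # confirm the root filesystem has room again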
I just installed Kubernetes on a cluster of Ubuntu machines using the DigitalOcean guide with Ansible. Everything seems to be fine, but when verifying the cluster, the master node is in a NotReady status:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
jwdkube-master-01 NotReady master 44m v1.12.2
jwdkube-worker-01 Ready <none> 44m v1.12.2
jwdkube-worker-02 Ready <none> 44m v1.12.2
This is the version:
# kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:43:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
When I check the master node, the kube-proxy is hanging in a starting mode:
# kubectl describe nodes jwdkube-master-01
Name: jwdkube-master-01
Roles: master
...
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Thu, 08 Nov 2018 10:24:45 +0000 Thu, 08 Nov 2018 09:36:10 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 104.248.207.107
Hostname: jwdkube-master-01
Capacity:
cpu: 1
ephemeral-storage: 25226960Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1008972Ki
pods: 110
Allocatable:
cpu: 1
ephemeral-storage: 23249166298
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 906572Ki
pods: 110
System Info:
Machine ID: 771c0f669c0a40a1ba7c28bf1f05a637
System UUID: 771c0f66-9c0a-40a1-ba7c-28bf1f05a637
Boot ID: 2532ae4d-c08c-45d8-b94c-6e88912ed627
Kernel Version: 4.18.0-10-generic
OS Image: Ubuntu 18.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.1
Kubelet Version: v1.12.2
Kube-Proxy Version: v1.12.2
PodCIDR: 10.244.0.0/24
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system etcd-jwdkube-master-01 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-apiserver-jwdkube-master-01 250m (25%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-controller-manager-jwdkube-master-01 200m (20%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-proxy-p8cbq 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-scheduler-jwdkube-master-01 100m (10%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 550m (55%) 0 (0%)
memory 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientDisk 48m (x6 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasSufficientDisk
Normal NodeHasSufficientMemory 48m (x6 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 48m (x6 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 48m (x5 over 48m) kubelet, jwdkube-master-01 Node jwdkube-master-01 status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 48m kubelet, jwdkube-master-01 Updated Node Allocatable limit across pods
Normal Starting 48m kube-proxy, jwdkube-master-01 Starting kube-proxy.
update
running kubectl get pods -n kube-system:
NAME READY STATUS RESTARTS AGE
coredns-576cbf47c7-8p7k2 1/1 Running 0 4h47m
coredns-576cbf47c7-s5tlv 1/1 Running 0 4h47m
etcd-jwdkube-master-01 1/1 Running 1 140m
kube-apiserver-jwdkube-master-01 1/1 Running 1 140m
kube-controller-manager-jwdkube-master-01 1/1 Running 1 140m
kube-flannel-ds-5bzrx 1/1 Running 0 4h47m
kube-flannel-ds-bfs9k 1/1 Running 0 4h47m
kube-proxy-4lrzw 1/1 Running 1 4h47m
kube-proxy-57x28 1/1 Running 0 4h47m
kube-proxy-j8bf5 1/1 Running 0 4h47m
kube-scheduler-jwdkube-master-01 1/1 Running 1 140m
tiller-deploy-6f6fd74b68-5xt54 1/1 Running 0 112m
It seems to be a compatibility problem between Flannel v0.9.1 and Kubernetes v1.12.2. Replacing the URL in the master configuration playbook should help:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
To enforce this solution on the current cluster:
On the master node delete relevant objects for Flannel v0.9.1:
kubectl delete clusterrole flannel -n kube-system
kubectl delete clusterrolebinding flannel -n kube-system
kubectl delete serviceaccount flannel -n kube-system
kubectl delete configmap kube-flannel-cfg -n kube-system
kubectl delete daemonset.extensions kube-flannel-ds -n kube-system
Proceed also with Flannel Pods deletion:
kubectl delete pod kube-flannel-ds-5bzrx -n kube-system
kubectl delete pod kube-flannel-ds-bfs9k -n kube-system
And check whether no more objects related to Flannel exist:
kubectl get all --all-namespaces
Install the latest Flannel version to your cluster:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
For me it works; however, if you discover any further problems, write a comment below this answer.
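To confirm the fix took, something like the following should show the new Flannel pods running and the master going Ready (a sketch; pod names will differ):
kubectl get pods -n kube-system | grep flannel
kubectl get nodes
kubectl describe node jwdkube-master-01 | grep -A6 Conditions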
I've set up a Kubernetes cluster with three nodes. All my nodes show status Ready, but the scheduler does not seem to find one of them. How could this happen?
[root@master1 app]# kubectl get nodes
NAME LABELS STATUS AGE
172.16.0.44 kubernetes.io/hostname=172.16.0.44,pxc=node1 Ready 8d
172.16.0.45 kubernetes.io/hostname=172.16.0.45 Ready 8d
172.16.0.46 kubernetes.io/hostname=172.16.0.46 Ready 8d
I use nodeSelector in my RC file like this:
nodeSelector:
  pxc: node1
Describing the RC:
Name: mongo-controller
Namespace: kube-system
Image(s): mongo
Selector: k8s-app=mongo
Labels: k8s-app=mongo
Replicas: 1 current / 1 desired
Pods Status: 0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Volumes:
mongo-persistent-storage:
Type: HostPath (bare host directory volume)
Path: /k8s/mongodb
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
25m 25m 1 {replication-controller } SuccessfulCreate Created pod: mongo-controller-0wpwu
kubectl get pods shows the pod stuck in Pending:
[root@master1 app]# kubectl get pods mongo-controller-0wpwu --namespace=kube-system
NAME READY STATUS RESTARTS AGE
mongo-controller-0wpwu 0/1 Pending 0 27m
describe pod mongo-controller-0wpwu:
[root@master1 app]# kubectl describe pod mongo-controller-0wpwu --namespace=kube-system
Name: mongo-controller-0wpwu
Namespace: kube-system
Image(s): mongo
Node: /
Labels: k8s-app=mongo
Status: Pending
Reason:
Message:
IP:
Replication Controllers: mongo-controller (1/1 replicas created)
Containers:
mongo:
Container ID:
Image: mongo
Image ID:
QoS Tier:
cpu: BestEffort
memory: BestEffort
State: Waiting
Ready: False
Restart Count: 0
Environment Variables:
Volumes:
mongo-persistent-storage:
Type: HostPath (bare host directory volume)
Path: /k8s/mongodb
default-token-7qjcu:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-7qjcu
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
22m 37s 12 {default-scheduler } FailedScheduling pod (mongo-controller-0wpwu) failed to fit in any node
fit failure on node (172.16.0.46): MatchNodeSelector
fit failure on node (172.16.0.45): MatchNodeSelector
27m 9s 67 {default-scheduler } FailedScheduling pod (mongo-controller-0wpwu) failed to fit in any node
fit failure on node (172.16.0.45): MatchNodeSelector
fit failure on node (172.16.0.46): MatchNodeSelector
Looking at the IP list in the events, 172.16.0.44 does not seem to be seen by the scheduler. How could that happen?
Describing the node 172.16.0.44:
[root@master1 app]# kubectl describe nodes --namespace=kube-system
Name: 172.16.0.44
Labels: kubernetes.io/hostname=172.16.0.44,pxc=node1
CreationTimestamp: Wed, 30 Mar 2016 15:58:47 +0800
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
──── ────── ───────────────── ────────────────── ────── ───────
Ready True Fri, 08 Apr 2016 12:18:01 +0800 Fri, 08 Apr 2016 11:18:52 +0800 KubeletReady kubelet is posting ready status
OutOfDisk Unknown Wed, 30 Mar 2016 15:58:47 +0800 Thu, 07 Apr 2016 17:38:50 +0800 NodeStatusNeverUpdated Kubelet never posted node status.
Addresses: 172.16.0.44,172.16.0.44
Capacity:
cpu: 2
memory: 7748948Ki
pods: 40
System Info:
Machine ID: 45461f76679f48ee96e95da6cc798cc8
System UUID: 2B850D4F-953C-4C20-B182-66E17D5F6461
Boot ID: 40d2cd8d-2e46-4fef-92e1-5fba60f57965
Kernel Version: 3.10.0-123.9.3.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Container Runtime Version: docker://1.10.1
Kubelet Version: v1.2.0
Kube-Proxy Version: v1.2.0
ExternalID: 172.16.0.44
Non-terminated Pods: (1 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
───────── ──── ──────────── ────────── ─────────────── ─────────────
kube-system kube-registry-proxy-172.16.0.44 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests CPU Limits Memory Requests Memory Limits
──────────── ────────── ─────────────── ─────────────
100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%)
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
59m 59m 1 {kubelet 172.16.0.44} Starting Starting kubelet.
SSHing into .44, I can see that disk space is free (I also removed some docker images and containers):
[root@iZ25dqhvvd0Z ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 40G 2.6G 35G 7% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.7G 0 3.7G 0% /dev/shm
tmpfs 3.7G 143M 3.6G 4% /run
tmpfs 3.7G 0 3.7G 0% /sys/fs/cgroup
/dev/xvdb 40G 361M 37G 1% /k8s
The docker logs of the scheduler (v1.3.0-alpha.1) still show this:
E0408 05:28:42.679448 1 factory.go:387] Error scheduling kube-system mongo-controller-0wpwu: pod (mongo-controller-0wpwu) failed to fit in any node
fit failure on node (172.16.0.45): MatchNodeSelector
fit failure on node (172.16.0.46): MatchNodeSelector
; retrying
I0408 05:28:42.679577 1 event.go:216] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"mongo-controller-0wpwu", UID:"2d0f0844-fd3c-11e5-b531-00163e000727", APIVersion:"v1", ResourceVersion:"634139", FieldPath:""}): type: 'Warning' reason: 'FailedScheduling' pod (mongo-controller-0wpwu) failed to fit in any node
fit failure on node (172.16.0.45): MatchNodeSelector
fit failure on node (172.16.0.46): MatchNodeSelector
Thanks for your reply, Robert. I got this resolved by doing the following:
kubectl delete rc
kubectl delete node 172.16.0.44
stop kubelet in 172.16.0.44
rm -rf /k8s/*
restart kubelet
Now the node is Ready, and the out-of-disk condition is gone.
Name: 172.16.0.44
Labels: kubernetes.io/hostname=172.16.0.44,pxc=node1
CreationTimestamp: Fri, 08 Apr 2016 15:14:51 +0800
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
──── ────── ───────────────── ────────────────── ────── ───────
Ready True Fri, 08 Apr 2016 15:25:33 +0800 Fri, 08 Apr 2016 15:14:50 +0800 KubeletReady kubelet is posting ready status
Addresses: 172.16.0.44,172.16.0.44
Capacity:
cpu: 2
memory: 7748948Ki
pods: 40
System Info:
Machine ID: 45461f76679f48ee96e95da6cc798cc8
System UUID: 2B850D4F-953C-4C20-B182-66E17D5F6461
Boot ID: 40d2cd8d-2e46-4fef-92e1-5fba60f57965
Kernel Version: 3.10.0-123.9.3.el7.x86_64
OS Image: CentOS Linux 7 (Core)
I found this https://github.com/kubernetes/kubernetes/issues/4135, but still don't know why my disk space is free and kubelet thinks it is out of disk...
The reason the scheduler failed is that there wasn't space to fit the pod onto the node. If you look at the conditions for your node, it says that the OutOfDisk condition is Unknown. The scheduler is probably not willing to place a pod onto a node that it thinks doesn't have available disk space.
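A quick way to see the conditions the scheduler is looking at, as a sketch (the jsonpath variant may need a reasonably recent kubectl):
kubectl describe node 172.16.0.44 | grep -A8 Conditions
kubectl get node 172.16.0.44 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'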
We had the same issue in AWS when they changed DNS from IP=DNS name to IP=IP.eu-central: nodes showed Ready but were not reachable via their name.