Why doesn't the 'journalctl -u kubelet -f' command show all the logs? - kubernetes

When I run
journalctl -f
I get the following logs:
Jan 20 16:28:49 node1 kubelet[1237]: I0120 16:28:49.858522 1237 scope.go:110] "RemoveContainer" containerID="6ff68682a6151eaecce82b16ca6bbc23ce44e71aedd871e5816dec1989a6ac7d"
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.859688275+09:00" level=info msg="RemoveContainer for \"6ff68682a6151eaecce82b16ca6bbc23ce44e71aedd871e5816dec1989a6ac7d\""
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.866650422+09:00" level=info msg="RemoveContainer for \"6ff68682a6151eaecce82b16ca6bbc23ce44e71aedd871e5816dec1989a6ac7d\" returns successfully"
Jan 20 16:28:49 node1 kubelet[1237]: I0120 16:28:49.866961 1237 scope.go:110] "RemoveContainer" containerID="f205217c8ed1ca6303d9035e95584af96708d07600887d2b4254d1080389dfbd"
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.868036395+09:00" level=info msg="RemoveContainer for \"f205217c8ed1ca6303d9035e95584af96708d07600887d2b4254d1080389dfbd\""
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.872289374+09:00" level=info msg="RemoveContainer for \"f205217c8ed1ca6303d9035e95584af96708d07600887d2b4254d1080389dfbd\" returns successfully"
Jan 20 16:28:49 node1 kubelet[1237]: I0120 16:28:49.872457 1237 scope.go:110] "RemoveContainer" containerID="b517943a97621ec70c3bbf95d4e6caa9c109ceba19eb013abdfeb252682db634"
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.873342572+09:00" level=info msg="RemoveContainer for \"b517943a97621ec70c3bbf95d4e6caa9c109ceba19eb013abdfeb252682db634\""
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.877366463+09:00" level=info msg="RemoveContainer for \"b517943a97621ec70c3bbf95d4e6caa9c109ceba19eb013abdfeb252682db634\" returns successfully"
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.879261150+09:00" level=info msg="StopPodSandbox for \"b08cbdab3744ff66a176a26643e59ec6b925082af7802dc9ea8dea29b6695331\""
However, when I run it with the -u option
journalctl -u kubelet -f
I don't get recent logs. I only get logs from one day earlier:
Jan 19 03:01:32 node kubelet[1237]: I0119 03:01:32.192834 1237 event.go:294] "Event occurred" object="kube-system/nginx-proxy-node" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="SandboxChanged" message="Pod sandbox changed, it will be killed and re-created."
Jan 19 03:01:32 node kubelet[1237]: I0119 03:01:32.212913 1237 reflector.go:255] Listing and watching *v1.Service from vendor/k8s.io/client-go/informers/factory.go:134
Jan 19 03:01:32 node kubelet[1237]: E0119 03:01:32.213013 1237 kubelet.go:2424] "Error getting node" err="node \"node\" not found"
Jan 19 03:01:32 node kubelet[1237]: W0119 03:01:32.213540 1237 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://localhost:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
Jan 19 03:01:32 node kubelet[1237]: E0119 03:01:32.213598 1237 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://localhost:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
Why can't I get the complete kubelet logs (the ones journalctl -f shows) with the -u option?
What is the difference between the two commands?
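For reference on what -u actually filters: journalctl -u kubelet only shows entries whose _SYSTEMD_UNIT field is kubelet.service, while plain journalctl -f follows everything in the journal. A quick diagnostic sketch (standard journalctl options) to see which unit, if any, the live kubelet messages are attributed to:
# print the journal metadata of the most recent kubelet message
journalctl _COMM=kubelet -n 1 -o verbose
# follow kubelet messages by process name instead of by unit
journalctl _COMM=kubelet -f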

Related

kubelet-check: timed out waiting for the condition

I am trying to set up Kubernetes on a Raspberry Pi cluster. After installing Docker and Kubernetes, I tried to initialize the master node with the command:
sudo kubeadm init --apiserver-advertise-address=192.168.22.10
(192.168.22.10 is the IP address of this Raspberry Pi)
And I came across this error (screenshot): http://thyrsi.com/t6/660/1548094273x2890171671.png
The Docker version is 18.09. The system is Raspbian Lite (2018-11-15). The Kubernetes version is 1.13.2.
I tried the command:
systemctl status kubelet
and got the following output:
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Tue 2019-01-22 01:17:57 CST; 7min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 1091 (kubelet)
Memory: 31.9M
CPU: 1min 7.370s
CGroup: /system.slice/kubelet.service
└─1091 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.1
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: I0122 01:25:16.563963 1091 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.581094 1091 pod_workers.go:190] Error syncing pod b5725949b6a8661a393eba83efb9c2e0 ("kube-controller-manager-raspberrypi6_kube-system(b5725949b6a8661a393eba83efb9c2e0)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-raspberrypi6_kube-system(b5725949b6a8661a393eba83efb9c2e0)"
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.592811 1091 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://192.168.22.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Draspberrypi6&limit=500&resourceVersion=0: dial tcp 192.168.22.10:6443: connect: connection refused
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.601066 1091 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.22.10:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.22.10:6443: connect: connection refused
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.621853 1091 kubelet.go:2266] node "raspberrypi6" not found
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.654152 1091 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.22.10:6443/api/v1/pods?fieldSelector=spec.nodeName%3Draspberrypi6&limit=500&resourceVersion=0: dial tcp 192.168.22.10:6443: connect: connection refused
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.722216 1091 kubelet.go:2266] node "raspberrypi6" not found
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.822567 1091 kubelet.go:2266] node "raspberrypi6" not found
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.922899 1091 kubelet.go:2266] node "raspberrypi6" not found
Jan 22 01:25:17 raspberrypi6 kubelet[1091]: E0122 01:25:17.023216 1091 kubelet.go:2266] node "raspberrypi6" not found
I have searched the Internet but couldn't find any solution.
Any insight would be greatly appreciated.
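One way to dig further (assuming Docker is the container runtime, as it is on this setup) is to read the logs of the crash-looping kube-controller-manager container directly; the container ID below is a placeholder:
docker ps -a | grep kube-controller-manager
docker logs --tail 50 <container-id>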

ICP 2.1.0.3 Install Timeout: FAILED - RETRYING: Waiting for kube-dns to start

It looks like the issue is caused by CNI (Calico), but I am not sure what the fix is in ICP (see the journalctl -u kubelet logs below).
ICP Installer Log:
FAILED! => {"attempts": 100, "changed": true, "cmd": "kubectl -n kube-system get daemonset kube-dns -o=custom-columns=A:.status.numberAvailable,B:.status.desiredNumberScheduled --no-headers=true | tr -s \" \" | awk '$1 == $2 {print \"READY\"}'", "delta": "0:00:00.403489", "end": "2018-07-08 09:04:21.922839", "rc": 0, "start": "2018-07-08 09:04:21.519350", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
journalctl -u kubelet:
Jul 08 22:40:38 dev-master hyperkube[2763]: E0708 22:40:38.548157 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: nodes is forbidden: User "kubelet" cannot list nodes at the cluster scope
Jul 08 22:40:38 dev-master hyperkube[2763]: E0708 22:40:38.549872 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "kubelet" cannot list pods at the cluster scope
Jul 08 22:40:38 dev-master hyperkube[2763]: E0708 22:40:38.555379 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: services is forbidden: User "kubelet" cannot list services at the cluster scope
Jul 08 22:40:38 dev-master hyperkube[2763]: E0708 22:40:38.738411 2763 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"k8s-master-10.50.50.201.153f85e7528e5906", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"k8s-master-10.50.50.201", UID:"b0ed63e50c3259666286e5a788d12b81", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{scheduler}"}, Reason:"Started", Message:"Started container", Source:v1.EventSource{Component:"kubelet", Host:"10.50.50.201"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbec8c296b58a5506, ext:106413065445, loc:(*time.Location)(0xb58e300)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbec8c296b58a5506, ext:106413065445, loc:(*time.Location)(0xb58e300)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "kubelet" cannot create events in the namespace "kube-system"' (will not retry!)
Jul 08 22:40:43 dev-master hyperkube[2763]: E0708 22:40:43.938806 2763 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 08 22:40:44 dev-master hyperkube[2763]: E0708 22:40:44.556337 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: nodes is forbidden: User "kubelet" cannot list nodes at the cluster scope
Jul 08 22:40:44 dev-master hyperkube[2763]: E0708 22:40:44.557513 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "kubelet" cannot list pods at the cluster scope
Jul 08 22:40:44 dev-master hyperkube[2763]: E0708 22:40:44.561007 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: services is forbidden: User "kubelet" cannot list services at the cluster scope
Jul 08 22:40:45 dev-master hyperkube[2763]: E0708 22:40:45.557584 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: nodes is forbidden: User "kubelet" cannot list nodes at the cluster scope
Jul 08 22:40:45 dev-master hyperkube[2763]: E0708 22:40:45.558375 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "kubelet" cannot list pods at the cluster scope
Jul 08 22:40:45 dev-master hyperkube[2763]: E0708 22:40:45.561807 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: services is forbidden: User "kubelet" cannot list services at the cluster scope
Jul 08 22:40:46 dev-master hyperkube[2763]: I0708 22:40:46.393905 2763 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jul 08 22:40:46 dev-master hyperkube[2763]: I0708 22:40:46.396261 2763 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jul 08 22:40:46 dev-master hyperkube[2763]: E0708 22:40:46.397540 2763 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: nodes is forbidden: User "kubelet" cannot create nodes at the cluster scope
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.161949 9689 cni.go:259] Error adding network: no configured Calico pools
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.161980 9689 cni.go:227] Error while adding to cni network: no configured Calico pools
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.468392 9689 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-dns-splct_kube-system" network: no configured Calico
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.468455 9689 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-dns-splct_kube-system(113e64b2-82e6-11e8-83bb-0242a9e42805)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.468479 9689 kuberuntime_manager.go:646] createPodSandbox for pod "kube-dns-splct_kube-system(113e64b2-82e6-11e8-83bb-0242a9e42805)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.468556 9689 pod_workers.go:186] Error syncing pod 113e64b2-82e6-11e8-83bb-0242a9e42805 ("kube-dns-splct_kube-system(113e64b2-82e6-11e8-83bb-0242a9e42805)"), skipping: failed to "CreatePodSandbox" for "kube-d
Jul 08 19:43:48 dev-master hyperkube[9689]: I0708 19:43:48.938222 9689 kuberuntime_manager.go:513] Container {Name:calico-node Image:ibmcom/calico-node:v3.0.4 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ETCD_ENDPOINTS Value: ValueFrom:&EnvVarSource
Jul 08 19:43:48 dev-master hyperkube[9689]: e:FELIX_HEALTHENABLED Value:true ValueFrom:nil} {Name:IP_AUTODETECTION_METHOD Value:can-reach=10.50.50.201 ValueFrom:nil}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:lib-modules ReadOnly:true MountPath:/lib/m
Jul 08 19:43:48 dev-master hyperkube[9689]: I0708 19:43:48.938449 9689 kuberuntime_manager.go:757] checking backoff for container "calico-node" in pod "calico-node-wpln7_kube-system(10107b3e-82e6-11e8-83bb-0242a9e42805)"
Jul 08 19:43:48 dev-master hyperkube[9689]: I0708 19:43:48.938699 9689 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=calico-node pod=calico-node-wpln7_kube-system(10107b3e-82e6-11e8-83bb-0242a9e42805)
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.938735 9689 pod_workers.go:186] Error syncing pod 10107b3e-82e6-11e8-83bb-0242a9e42805 ("calico-node-wpln7_kube-system(10107b3e-82e6-11e8-83bb-0242a9e42805)"), skipping: failed to "StartContainer" for "calic
docker ps (master node): the container k8s_POD_kube-dns-splct_kube-system_* is repeatedly crashing.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ed24d636fdd1 ibmcom/pause:3.0 "/pause" 1 second ago Up Less than a second k8s_POD_kube-dns-splct_kube-system_113e64b2-82e6-11e8-83bb-0242a9e42805_121
49b648837900 ibmcom/calico-cni "/install-cni.sh" 5 minutes ago Up 5 minutes k8s_install-cni_calico-node-wpln7_kube-system_10107b3e-82e6-11e8-83bb-0242a9e42805_0
933ff30177de ibmcom/calico-kube-controllers "/usr/bin/kube-contr…" 5 minutes ago Up 5 minutes k8s_calico-kube-controllers_calico-kube-controllers-759f7fc556-mm5tg_kube-system_1010712e-82e6-11e8-83bb-0242a9e42805_0
12e9262299af ibmcom/pause:3.0 "/pause" 6 minutes ago Up 5 minutes k8s_POD_calico-kube-controllers-759f7fc556-mm5tg_kube-system_1010712e-82e6-11e8-83bb-0242a9e42805_0
8dcb2b2b3eb5 ibmcom/pause:3.0 "/pause" 6 minutes ago Up 5 minutes k8s_POD_calico-node-wpln7_kube-system_10107b3e-82e6-11e8-83bb-0242a9e42805_0
9486ff78df31 ibmcom/tiller "/tiller" 6 minutes ago Up 6 minutes k8s_tiller_tiller-deploy-c59888d97-7nwph_kube-system_016019ab-82e6-11e8-83bb-0242a9e42805_0
e5588f68af1b ibmcom/pause:3.0 "/pause" 6 minutes ago Up 6 minutes k8s_POD_tiller-deploy-c59888d97-7nwph_kube-system_016019ab-82e6-11e8-83bb-0242a9e42805_0
e80460d857ff ibmcom/icp-image-manager "/icp-image-manager …" 10 minutes ago Up 10 minutes k8s_image-manager_image-manager-0_kube-system_7b7554ce-82e5-11e8-83bb-0242a9e42805_0
e207175f19b7 ibmcom/registry "/entrypoint.sh /etc…" 10 minutes ago Up 10 minutes k8s_icp-registry_image-manager-0_kube-system_7b7554ce-82e5-11e8-83bb-0242a9e42805_0
477faf0668f3 ibmcom/pause:3.0 "/pause" 10 minutes ago Up 10 minutes k8s_POD_image-manager-0_kube-system_7b7554ce-82e5-11e8-83bb-0242a9e42805_0
8996bb8c37b7 d4b6454d4873 "/hyperkube schedule…" 10 minutes ago Up 10 minutes k8s_scheduler_k8s-master-10.50.50.201_kube-system_9e5bce1f08c050be21fa6380e4e363cc_0
835ee941432c d4b6454d4873 "/hyperkube apiserve…" 10 minutes ago Up 10 minutes k8s_apiserver_k8s-master-10.50.50.201_kube-system_9e5bce1f08c050be21fa6380e4e363cc_0
de409ff63cb2 d4b6454d4873 "/hyperkube controll…" 10 minutes ago Up 10 minutes k8s_controller-manager_k8s-master-10.50.50.201_kube-system_9e5bce1f08c050be21fa6380e4e363cc_0
716032a308ea ibmcom/pause:3.0 "/pause" 10 minutes ago Up 10 minutes k8s_POD_k8s-master-10.50.50.201_kube-system_9e5bce1f08c050be21fa6380e4e363cc_0
bd9e64e3d6a2 d4b6454d4873 "/hyperkube proxy --…" 12 minutes ago Up 12 minutes k8s_proxy_k8s-proxy-10.50.50.201_kube-system_3e068267cfe8f990cd2c9a4635be044d_0
bab3c9ef7e40 ibmcom/pause:3.0 "/pause" 12 minutes ago Up 12 minutes k8s_POD_k8s-proxy-10.50.50.201_kube-system_3e068267cfe8f990cd2c9a4635be044d_0
kubectl (master node): I believe Kubernetes should have been initialized and running by this time, but it seems like it is not.
kubectl get pods -s 127.0.0.1:8888 --all-namespaces
The connection to the server 127.0.0.1:8888 was refused - did you specify the right host or port?
The following are the options I tried:
- Creating the cluster with both IP_IP enabled and disabled. As all nodes are on the same subnet, the IP_IP setting should not have an impact.
- Running etcd on a separate node, and as part of the master node.
ifconfig tunl0 returns the following (i.e., without an IP assignment) in all of the above scenarios:
tunl0 Link encap:IPIP Tunnel HWaddr
NOARP MTU:1480 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
'calicoctl get profile' returns empty, and so does 'calicoctl get nodes', which I believe is because Calico is not configured yet.
Any other checks, thoughts, or options?
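Since the kubelet errors above say "no configured Calico pools", one avenue is to check for an IP pool and create one if it is missing. A sketch using the Calico v3 resource API; the CIDR and IPIP mode are assumptions that must match your cluster configuration:
calicoctl get ippool -o wide
# if nothing is returned, create a pool (the CIDR below is a placeholder)
calicoctl create -f - <<EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 10.1.0.0/16
  ipipMode: Always
  natOutgoing: true
EOF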
Calico Kube Controller Logs (repeated):
2018-07-09 05:46:08.440 [WARNING][1] cache.go 278: Value for key has changed, queueing update to reprogram key="kns.default" type=v3.Profile
2018-07-09 05:46:08.440 [WARNING][1] cache.go 278: Value for key has changed, queueing update to reprogram key="kns.kube-public" type=v3.Profile
2018-07-09 05:46:08.440 [WARNING][1] cache.go 278: Value for key has changed, queueing update to reprogram key="kns.kube-system" type=v3.Profile
2018-07-09 05:46:08.440 [INFO][1] namespace_controller.go 223: Create/Update Profile in Calico datastore key="kns.default"
2018-07-09 05:46:08.441 [INFO][1] namespace_controller.go 246: Update Profile in Calico datastore with resource version key="kns.default"
2018-07-09 05:46:08.442 [INFO][1] namespace_controller.go 252: Successfully updated profile key="kns.default"
2018-07-09 05:46:08.442 [INFO][1] namespace_controller.go 223: Create/Update Profile in Calico datastore key="kns.kube-public"
2018-07-09 05:46:08.446 [INFO][1] namespace_controller.go 246: Update Profile in Calico datastore with resource version key="kns.kube-public"
2018-07-09 05:46:08.447 [INFO][1] namespace_controller.go 252: Successfully updated profile key="kns.kube-public"
2018-07-09 05:46:08.447 [INFO][1] namespace_controller.go 223: Create/Update Profile in Calico datastore key="kns.kube-system"
2018-07-09 05:46:08.465 [INFO][1] namespace_controller.go 246: Update Profile in Calico datastore with resource version key="kns.kube-system"
2018-07-09 05:46:08.476 [INFO][1] namespace_controller.go 252: Successfully updated profile key="kns.kube-system"
Firstly, from ICP 2.1.0.3 the insecure port 8888 for the K8S apiserver is disabled, so you cannot use this insecure port to talk to Kubernetes.
For this issue, could you share the following information or outputs:
The network configurations in your environment.
-> ifconfig -a
The route table:
-> route
The contents of your /etc/hosts file:
-> cat /etc/hosts
The ICP installation configurations files.
-> config.yaml & hosts
Well, the issue seemed to be with the Docker storage driver (btrfs) that I was using. Once I switched to 'overlay', I was able to move forward.
I had the same experience, at least the same high-level error message:
"FAILED - RETRYING: Waiting for kube-dns to start".
I had to do 2 things to get past this installation step:
- change the hostname (and the entry in my /etc/hosts) to lowercase. Calico doesn't like uppercase.
- comment out the localhost entry in /etc/hosts (#127.0.0.1 localhost.localdomain localhost)
After doing that, the installation completed fine.
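For illustration, the resulting /etc/hosts might look like this (the IP and hostname are taken from this thread and are only an example):
#127.0.0.1 localhost.localdomain localhost
10.50.50.201 dev-master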

kubeadm init 1.9 hangs with vsphere cloud provider

kubeadm init seems to hang ever since I started using the vSphere cloud provider. I followed the instructions from here - has anybody got it working with 1.9?
root@master-0:~# kubeadm init --config /tmp/kube.yaml
[init] Using Kubernetes version: v1.9.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "master-0" could not be reached
[WARNING Hostname]: hostname "master-0" lookup master-0 on 8.8.8.8:53: no such host
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master-0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.11.0.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
Master OS details:
root@master-0:~# uname -r
4.4.0-21-generic
root@master-0:~# docker version
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Experimental: false
root@master-0:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
ID=ubuntu
The kubelet service seems to be running fine:
root@master-0:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 90-local-extras.conf
Active: active (running) since Mon 2018-01-22 11:25:00 UTC; 13min ago
Docs: http://kubernetes.io/docs/
Main PID: 4270 (kubelet)
Tasks: 13 (limit: 512)
Memory: 37.6M
CPU: 11.626s
CGroup: /system.slice/kubelet.service
└─4270 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
journalctl -f -u kubelet shows some connection-refused errors, probably because the networking service is missing. Those errors should go away once networking is installed after kubeadm init (see the sketch after the log excerpt below):
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.759764 1184 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.761350 1184 controller.go:114] kubelet config controller: starting controller
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.762632 1184 controller.go:118] kubelet config controller: validating combination of defaults and flags
Jan 22 11:17:46 localhost systemd[1]: Started Kubernetes systemd probe.
Jan 22 11:17:46 localhost kubelet[1184]: W0122 11:17:46.070619 1184 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.081384 1184 server.go:182] Version: v1.9.1
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.081417 1184 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.082271 1184 plugins.go:101] No cloud provider specified.
Jan 22 11:17:46 localhost kubelet[1184]: error: failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Unit entered failed state.
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 11:17:56 localhost systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jan 22 11:17:56 localhost systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 22 11:17:56 localhost systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.410883 1229 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.411198 1229 controller.go:114] kubelet config controller: starting controller
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.411353 1229 controller.go:118] kubelet config controller: validating combination of defaults and flags
Jan 22 11:17:56 localhost systemd[1]: Started Kubernetes systemd probe.
Jan 22 11:17:56 localhost kubelet[1229]: W0122 11:17:56.424264 1229 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429102 1229 server.go:182] Version: v1.9.1
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429156 1229 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429247 1229 plugins.go:101] No cloud provider specified.
Jan 22 11:17:56 localhost kubelet[1229]: E0122 11:17:56.461608 1229 certificate_manager.go:314] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://10.11.0.101:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 10.11.0.101:6443: getsockopt: connection refused
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.491374 1229 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.492069 1229 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.492102 1229 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
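As noted above, these connection-refused and "No networks found in /etc/cni/net.d" messages typically clear once a pod network add-on is applied after kubeadm init completes. A generic sketch; the manifest path is a placeholder for whichever CNI plugin you use:
kubectl apply -f <your-cni-plugin-manifest>.yaml
# afterwards, the CNI config should appear on the node:
ls /etc/cni/net.d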
docker ps, controller & scheduler logs
root@master-0:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6db549891439 677911f7ae8f "kube-scheduler --..." About an hour ago Up About an hour k8s_kube-scheduler_kube-scheduler-master-0_kube-system_df32e281019039e73be77e3f53d09596_0
4f7ddefbd86e 4978f9a64966 "kube-controller-m..." About an hour ago Up About an hour k8s_kube-controller-manager_kube-controller-manager-master-0_kube-system_34bad395be69e74db6304d6c4218c536_0
18604db89db6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-scheduler-master-0_kube-system_df32e281019039e73be77e3f53d09596_0
252b86ea4b5e gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-controller-manager-master-0_kube-system_34bad395be69e74db6304d6c4218c536_0
4021061bf8a6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-apiserver-master-0_kube-system_7a3ae9279d0ca7b4ada8333fbe7442ed_0
4f94163d313b gcr.io/google_containers/etcd-amd64:3.1.10 "etcd --name=etcd0..." About an hour ago Up About an hour 0.0.0.0:2379-2380->2379-2380/tcp, 0.0.0.0:4001->4001/tcp, 7001/tcp etcd
root@master-0:~# docker logs -f 4f7ddefbd86e
I0122 11:25:06.253706 1 controllermanager.go:108] Version: v1.9.1
I0122 11:25:06.258712 1 leaderelection.go:174] attempting to acquire leader lease...
E0122 11:25:06.259448 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:09.711377 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:13.969270 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:17.564964 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:20.616174 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
root@master-0:~# docker logs -f 6db549891439
W0122 11:25:06.285765 1 server.go:159] WARNING: all flags other than --config are deprecated. Please begin using a config file ASAP.
I0122 11:25:06.292865 1 server.go:551] Version: v1.9.1
I0122 11:25:06.295776 1 server.go:570] starting healthz server on 127.0.0.1:10251
E0122 11:25:06.295947 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: Get https://10.11.0.101:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296027 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: Get https://10.11.0.101:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296092 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: Get https://10.11.0.101:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296160 1 reflector.go:205] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:590: Failed to list *v1.Pod: Get https://10.11.0.101:6443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296218 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.StatefulSet: Get https://10.11.0.101:6443/apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.297374 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolume: Get https://10.11.0.101:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
There was a bug in the controller manager when starting with the vsphere cloud provider. See https://github.com/kubernetes/kubernetes/issues/57279, fixed in 1.9.2
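Before re-running init, it is cheap to confirm whether you are on an affected version (standard kubeadm/kubectl commands):
kubeadm version -o short   # the fix landed in v1.9.2 per the issue above
kubectl version --short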

kubelet restarting randomly

We have a two-node cluster running on GKE, and for a long time we have been suffering random downtimes due to what appear to be random restarts of the kubelet service.
These are the logs for the last downtime, from the node that was running the Kubernetes system pods at that time:
sudo journalctl -r -u kubelet
Aug 22 04:17:36 gke-app-node1 systemd[1]: Started Kubernetes kubelet.
Aug 22 04:17:36 gke-app-node1 systemd[1]: Stopped Kubernetes kubelet.
Aug 22 04:17:36 gke-app-node1 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Aug 22 04:10:46 gke-app-node1 systemd[1]: Started Kubernetes kubelet.
Aug 22 04:10:46 gke-app-node1 systemd[1]: Stopped Kubernetes kubelet.
Aug 22 04:10:46 gke-app-node1 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Aug 22 04:09:48 gke-app-node1 systemd[1]: Started Kubernetes kubelet.
Aug 22 04:09:46 gke-app-node1 systemd[1]: Stopped Kubernetes kubelet.
Aug 22 04:09:44 gke-app-node1 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Aug 22 04:02:05 gke-app-node1 systemd[1]: Started Kubernetes kubelet.
Aug 22 04:02:03 gke-app-node1 systemd[1]: Stopped Kubernetes kubelet.
Aug 22 04:02:03 gke-app-node1 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Aug 22 04:01:09 gke-app-node1 systemd[1]: Started Kubernetes kubelet.
Aug 22 04:01:09 gke-app-node1 systemd[1]: Stopped Kubernetes kubelet.
Aug 22 04:01:08 gke-app-node1 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Aug 22 04:00:58 gke-app-node1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 22 04:00:58 gke-app-node1 systemd[1]: kubelet.service: Unit entered failed state.
Aug 22 04:00:58 gke-app-node1 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 22 04:00:58 gke-app-node1 kubelet[1330]: I0822 04:00:58.104306 1330 server.go:794] GET /healthz: (5.286840082s) 200 [[curl/7.51.0] 127.0.0.1:35924]
Aug 22 04:00:58 gke-app-node1 kubelet[1330]: I0822 04:00:57.981923 1330 docker_server.go:73] Stop docker server
Aug 22 04:00:58 gke-app-node1 kubelet[1330]: I0822 04:00:53.991354 1330 server.go:794] GET /healthz: (5.834296581s) 200 [[Go-http-client/1.1] 127.0.0.1:35926]
Aug 22 04:00:57 gke-app-node1 kubelet[1330]: I0822 04:00:42.636036 1330 fsHandler.go:131] du and find on following dirs took 16.466105259s: [/var/lib/docker/overlay/e496194dfcb8a053050a0eb73965f57b109fe3036c1ffc5b0f12b4a341f13794 /var/lib/docker/containers/b6a212aedf588a1f1d173fd9f4871f678d014e260e8aa6147ad8212619675802]
Aug 22 04:00:39 gke-app-node1 kubelet[1330]: I0822 04:00:38.061492 1330 fsHandler.go:131] du and find on following dirs took 12.246559762s: [/var/lib/docker/overlay/303dc4c5814a0a12a6ac450e5b27327f55a7baa8000c011bd38521f3ff997e0f /var/lib/docker/containers/18a95beaf86b382bb8abc6ee40033020de1da4b54a5ca52e1c61bf7f14d6ef44]
Aug 22 04:00:39 gke-app-node1 kubelet[1330]: I0822 04:00:38.476930 1330 fsHandler.go:131] du and find on following dirs took 11.766408434s: [/var/lib/docker/overlay/86802dda255243388ab86fa8fc403187193f8c4ccdee54d8ca18c021ca35bc36 /var/lib/docker/containers/7fd4d507ec6445035fcb4a60efd4ae68e54052c1cace3268be72954062fed830]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:35.865293 1330 prober.go:106] Readiness probe for "web-deployment-812924635-ntcqw_default(bcf76fb6-8661-11e7-88da-42010a840211):rails-app" failed (failure): Get http://10.48.1.7:80/health_check: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: W0822 04:00:35.380953 1330 prober.go:98] No ref for container "docker://e2dcca90c091d2789af9b22e1405cb273f63c399aecde2686ef4b1e8ab9fdc5f" (web-deployment-812924635-ntcqw_default(bcf76fb6-8661-11e7-88da-42010a840211):rails-app)
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:32.514573 1330 fsHandler.go:131] du and find on following dirs took 7.127181023s: [/var/lib/docker/overlay/647f419bce585a3d0f5792376b269704cb358828bc5c4fb5e815bfa23950d511 /var/lib/docker/containers/59f7ada601f38a243daa7154f2ed27790d14d163c4675e26186d9a6d9db0272e]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:32.255644 1330 fsHandler.go:131] du and find on following dirs took 6.72357089s: [/var/lib/docker/overlay/992a65b68531c5ac53e4cd06f7a8f8abe4b908d943b5b9cc38da126b469050b2 /var/lib/docker/containers/2be7aede380d6f3452a5abacc53f9e0a69f8c5ee3dbdf5351a30effdf2d47833]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:32.067601 1330 fsHandler.go:131] du and find on following dirs took 6.511405985s: [/var/lib/docker/overlay/7bc4e00d232b4a22eb64a87ad079970aabb24bde17d3adaa6989840ebc91b96c /var/lib/docker/containers/949c778861a4f86440c5dd21d4daf40e97fb49b9eb1498111d7941ca3e63541a]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:31.907928 1330 fsHandler.go:131] du and find on following dirs took 6.263993478s: [/var/lib/docker/overlay/303abc540335a9ce7077fd21182845fbff2f06ed9eb1ac8af9effdfd048153b5 /var/lib/docker/containers/6544add2796f365d67d72fe283e083042aa2af82862eb6335295d228efa28d61]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:31.907845 1330 fsHandler.go:131] du and find on following dirs took 7.630063774s: [/var/lib/docker/overlay/a36c376a7ddd04c168822770866d8c34499ddec7e4039ada579b3d65adc57347 /var/lib/docker/containers/6a606a6c901f8373dff22f94ba77a24956a7b4eed3d0e550be168eeaeed86236]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:31.902731 1330 fsHandler.go:131] du and find on following dirs took 6.259025553s: [/var/lib/docker/overlay/0a7170e1a42bfa8b0112d8c7bb805da8e4778aa5ce90978d90ed5335929633ff /var/lib/docker/containers/1f68eaa59cab0a0bcdc087e25d18573044b599967a56867d189acd82bc19172b]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:31.871796 1330 fsHandler.go:131] du and find on following dirs took 6.410999589s: [/var/lib/docker/overlay/25ffbf8bd71e814af8991cc52499286d2d345b3f348fec9358ca366f341ed588 /var/lib/docker/containers/efe1969587c9b0412fe7f7c8c24bbe1326d46f576bddf12f88ae7cd406b6475d]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:31.871699 1330 fsHandler.go:131] du and find on following dirs took 6.259940483s: [/var/lib/docker/overlay/56909c00ec20b59c1fcb4988cd51fe50ebb467681f37bab3f9061d76993565bc /var/lib/docker/containers/a8d1df672c23313313b511389f6eeb44e78c3f9e4c346d214fc190695f270e5f]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:31.614518 1330 fsHandler.go:131] du and find on following dirs took 5.982313751s: [/var/lib/docker/overlay/cb057acc4f3a3e91470847f78ffd550b25a24605cec42ee080aaf193933968cf /var/lib/docker/containers/e755c4d88e4e5d4d074806e829b1e83fd52c8e2b1c01c27131222a40b0c6c10a]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:31.837000 1330 fsHandler.go:131] du and find on following dirs took 7.500602734s: [/var/lib/docker/overlay/e9539d9569ccdcc79db1cd4add7036d70ad71391dc30ca16903bdd9bda4d0972 /var/lib/docker/containers/b0a7c955af1ed85f56aeaed1d787794d5ffd04c2a81820465a1e3453242c8a19]
Aug 22 04:00:34 gke-app-node1 kubelet[1330]: I0822 04:00:31.836947 1330 fsHandler.go:131] du and find on following dirs took 6.257091389s: [/var/lib/docker/overlay/200f0f063157381d25001350c34914e020ea16b3f82f7bedf7e4b01d34e513a7 /var/lib/docker/containers/eca7504b7e24332381e459a2f09acc150a5681c148cebc5867ac66021cbe0435]
Aug 22 04:00:33 gke-app-node1 kubelet[1330]: I0822 04:00:31.836787 1330 fsHandler.go:131] du and find on following dirs took 7.286756684s: [/var/lib/docker/overlay/37334712f505b11c7f0b27fb0580eadc0e79fc789dcfafbea1730efd500fb69c /var/lib/docker/containers/4858388c53032331868497859110a7267fef95110a7ab3664aa857a21ee02a3e]
Aug 22 04:00:22 gke-app-node1 kubelet[1330]: I0822 04:00:21.999930 1330 qos_container_manager_linux.go:286] [ContainerManager]: Updated QoS cgroup configuration
Aug 22 04:00:20 gke-app-node1 kubelet[1330]: I0822 04:00:19.598974 1330 server.go:794] GET /healthz: (136.991429ms) 200 [[curl/7.51.0] 127.0.0.1:35888]
Aug 22 04:00:10 gke-app-node1 kubelet[1330]: I0822 04:00:08.024328 1330 server.go:794] GET /healthz: (36.191534ms) 200 [[curl/7.51.0] 127.0.0.1:35868]
Aug 22 04:00:05 gke-app-node1 kubelet[1330]: I0822 04:00:05.861339 1330 server.go:794] GET /stats/summary/: (808.201834ms) 200 [[Go-http-client/1.1] 10.48.0.7:43022]
Aug 22 04:00:03 gke-app-node1 kubelet[1330]: W0822 04:00:02.723586 1330 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/kube-logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/kube-logrotate.service: no such file or directory
Aug 22 04:00:03 gke-app-node1 kubelet[1330]: W0822 04:00:02.723529 1330 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/kube-logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/kube-logrotate.service: no such file or directory
Aug 22 04:00:03 gke-app-node1 kubelet[1330]: W0822 04:00:02.622765 1330 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/kube-logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/kube-logrotate.service: no such file or directory
sudo dmesg -T
[Tue Aug 22 04:17:31 2017] cbr0: port 4(vethd18b0189) entered disabled state
[Tue Aug 22 04:17:31 2017] device vethd18b0189 left promiscuous mode
[Tue Aug 22 04:17:31 2017] cbr0: port 4(vethd18b0189) entered disabled state
[Tue Aug 22 04:17:40 2017] cbr0: port 6(veth2985149d) entered disabled state
[Tue Aug 22 04:17:40 2017] device veth2985149d left promiscuous mode
[Tue Aug 22 04:17:40 2017] cbr0: port 6(veth2985149d) entered disabled state
[Tue Aug 22 04:17:42 2017] cbr0: port 5(veth2a1d2827) entered disabled state
[Tue Aug 22 04:17:42 2017] device veth2a1d2827 left promiscuous mode
[Tue Aug 22 04:17:42 2017] cbr0: port 5(veth2a1d2827) entered disabled state
[Tue Aug 22 04:17:42 2017] cbr0: port 2(vetha070fbca) entered disabled state
[Tue Aug 22 04:17:42 2017] device vetha070fbca left promiscuous mode
[Tue Aug 22 04:17:42 2017] cbr0: port 2(vetha070fbca) entered disabled state
[Tue Aug 22 04:17:42 2017] cbr0: port 3(veth7e3e663a) entered disabled state
[Tue Aug 22 04:17:42 2017] device veth7e3e663a left promiscuous mode
[Tue Aug 22 04:17:42 2017] cbr0: port 3(veth7e3e663a) entered disabled state
[Tue Aug 22 04:17:57 2017] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[Tue Aug 22 04:17:57 2017] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[Tue Aug 22 04:17:57 2017] device veth215e85fc entered promiscuous mode
[Tue Aug 22 04:17:57 2017] cbr0: port 1(veth215e85fc) entered forwarding state
[Tue Aug 22 04:17:57 2017] cbr0: port 1(veth215e85fc) entered forwarding state
[Tue Aug 22 04:18:12 2017] cbr0: port 1(veth215e85fc) entered forwarding state
And finally here we can see how the Kubernetes pods were killed around that time:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d7b59cd7116c gcr.io/google_containers/pause-amd64:3.0 "/pause" 4 hours ago Up 4 hours k8s_POD_worker-deployment-3042507874-ghjpd_default_1de2291b-86ef-11e7-88da-42010a840211_0
e6f2563ac3c8 5e65193af899 "/monitor --component" 4 hours ago Up 4 hours k8s_prometheus-to-sd-exporter_fluentd-gcp-v2.0-5vlrc_kube-system_6cebabe5-84f9-11e7-88da-42010a840211_1
45d7886d0308 487e99ee05d9 "/bin/sh -c '/run.sh " 4 hours ago Up 4 hours k8s_fluentd-gcp_fluentd-gcp-v2.0-5vlrc_kube-system_6cebabe5-84f9-11e7-88da-42010a840211_1
b5a5c5d085ac gcr.io/google_containers/pause-amd64:3.0 "/pause" 4 hours ago Up 4 hours k8s_POD_fluentd-gcp-v2.0-5vlrc_kube-system_6cebabe5-84f9-11e7-88da-42010a840211_1
32dcd4d5847c 54d2a8698e3c "/bin/sh -c 'echo -99" 4 hours ago Up 4 hours k8s_kube-proxy_kube-proxy-gke-app-node1_kube-system_ed40100d42c9e285fa1f59ca7a1d8f8d_1
2d055d96b610 gcr.io/google_containers/pause-amd64:3.0 "/pause" 4 hours ago Up 4 hours k8s_POD_kube-proxy-gke-app-node1_kube-system_ed40100d42c9e285fa1f59ca7a1d8f8d_1
2be7aede380d 5e65193af899 "/monitor --component" 2 days ago Exited (0) 4 hours ago k8s_prometheus-to-sd-exporter_fluentd-gcp-v2.0-5vlrc_kube-system_6cebabe5-84f9-11e7-88da-42010a840211_0
7fd4d507ec64 54d2a8698e3c "/bin/sh -c 'echo -99" 2 days ago Exited (0) 4 hours ago k8s_kube-proxy_kube-proxy-gke-app-node1_kube-system_ed40100d42c9e285fa1f59ca7a1d8f8d_0
cc615ec1e87c efe10ee6727f "/bin/touch /run/xtab" 2 days ago Exited (0) 2 days ago k8s_touch-lock_kube-proxy-gke-app-node1_kube-system_ed40100d42c9e285fa1f59ca7a1d8f8d_0
b0a7c955af1e 487e99ee05d9 "/bin/sh -c '/run.sh " 2 days ago Exited (0) 4 hours ago k8s_fluentd-gcp_fluentd-gcp-v2.0-5vlrc_kube-system_6cebabe5-84f9-11e7-88da-42010a840211_0
4858388c5303 gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 days ago Exited (0) 4 hours ago k8s_POD_fluentd-gcp-v2.0-5vlrc_kube-system_6cebabe5-84f9-11e7-88da-42010a840211_0
6a606a6c901f gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 days ago Exited (0) 4 hours ago k8s_POD_kube-proxy-gke-app-node1_kube-system_ed40100d42c9e285fa1f59ca7a1d8f8d_0
Our cluster is running Kubernetes 1.7.3 for both the master and the node pools, and it's on GKE (zone europe-west1-d).
Any help would be appreciated, as we don't really know how to debug this problem further.
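One low-effort way to correlate the restarts (a diagnostic sketch, not a fix) is to extract just the start/stop/failure lines from the kubelet unit and line them up against the dmesg and docker ps timestamps above:
sudo journalctl -u kubelet --no-pager | grep -E "Started|Stopped|Main process exited|Failed with result"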

Failed to start Kubernetes API Server due to unknown reason

The service is not starting, and the listener is not activated on port 8080.
Here is my Kubernetes configuration:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://centos-master:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
systemctl status kube-apiserver -l
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Mon 2017-08-14 12:07:04 +0430; 29s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 2087 ExecStart=/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS (code=exited, status=2)
Main PID: 2087 (code=exited, status=2)
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Aug 14 12:07:04 centos-master systemd[1]: Failed to start Kubernetes API Server.
Aug 14 12:07:04 centos-master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service failed.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service holdoff time over, scheduling restart.
Aug 14 12:07:04 centos-master systemd[1]: start request repeated too quickly for kube-apiserver.service
Aug 14 12:07:04 centos-master systemd[1]: Failed to start Kubernetes API Server.
Aug 14 12:07:04 centos-master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service failed.
tail -n 1000 /var/log/messages
resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.240160 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466: Failed to list *api.PersistentVolume: Get http://centos-master:8080/api/v1/persistentvolumes?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.242039 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:470: Failed to list *api.Service: Get http://centos-master:8080/api/v1/services?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.242924 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:457: Failed to list *api.Pod: Get http://centos-master:8080/api/v1/pods?fieldSelector=spec.nodeName%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.269386 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:473: Failed to list *api.ReplicationController: Get http://centos-master:8080/api/v1/replicationcontrollers?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.285782 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481: Failed to list *extensions.ReplicaSet: Get http://centos-master:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.286529 606 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.PersistentVolumeClaim: Get http://centos-master:8080/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
systemd[1]: kube-apiserver.service: main process exited, code=exited, status=2/INVALIDARGUMENT
The arguments you're using do not seem valid.
Check the list of valid arguments here.
You can also follow the Kubernetes The Hard Way guide for a trusted way to run the API server.
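One way to see exactly which argument the binary rejects is to export the same environment the unit uses and run the server in the foreground. A sketch; it assumes the variables above live in /etc/kubernetes/config and /etc/kubernetes/apiserver, as in the stock CentOS unit files:
# paths are assumptions; adjust to where your distribution keeps the env files
set -a; source /etc/kubernetes/config; source /etc/kubernetes/apiserver; set +a
/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS
# the first invalid flag is printed to stderr before the process exits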