I am trying to set up Kubernetes on a Raspberry Pi cluster. After installing Docker and Kubernetes, I tried to initialize the master node with the command:
sudo kubeadm init --apiserver-advertise-address=192.168.22.10
(192.168.22.10 is the IP address of this Raspberry Pi)
And I came across the error shown in this screenshot:
http://thyrsi.com/t6/660/1548094273x2890171671.png
The Docker version is 18.09. The system is Raspbian Lite (2018-11-15). The Kubernetes version is 1.13.2.
I tried the command:
systemctl status kubelet
Then I got the following contents:
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Tue 2019-01-22 01:17:57 CST; 7min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 1091 (kubelet)
Memory: 31.9M
CPU: 1min 7.370s
CGroup: /system.slice/kubelet.service
└─1091 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.1
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: I0122 01:25:16.563963 1091 kubelet_node_status.go:278] Setting node annotation to enable volume controller attach/detach
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.581094 1091 pod_workers.go:190] Error syncing pod b5725949b6a8661a393eba83efb9c2e0 ("kube-controller-manager-raspberrypi6_kube-system(b5725949b6a8661a393eba83efb9c2e0)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-raspberrypi6_kube-system(b5725949b6a8661a393eba83efb9c2e0)"
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.592811 1091 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:453: Failed to list *v1.Node: Get https://192.168.22.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Draspberrypi6&limit=500&resourceVersion=0: dial tcp 192.168.22.10:6443: connect: connection refused
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.601066 1091 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.22.10:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.22.10:6443: connect: connection refused
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.621853 1091 kubelet.go:2266] node "raspberrypi6" not found
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.654152 1091 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.22.10:6443/api/v1/pods?fieldSelector=spec.nodeName%3Draspberrypi6&limit=500&resourceVersion=0: dial tcp 192.168.22.10:6443: connect: connection refused
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.722216 1091 kubelet.go:2266] node "raspberrypi6" not found
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.822567 1091 kubelet.go:2266] node "raspberrypi6" not found
Jan 22 01:25:16 raspberrypi6 kubelet[1091]: E0122 01:25:16.922899 1091 kubelet.go:2266] node "raspberrypi6" not found
Jan 22 01:25:17 raspberrypi6 kubelet[1091]: E0122 01:25:17.023216 1091 kubelet.go:2266] node "raspberrypi6" not found
I have searched the Internet but couldn't find any solution.
Any insight would be greatly appreciated.
Related
When I run the command
journalctl -f
I get the logs below.
Jan 20 16:28:49 node1 kubelet[1237]: I0120 16:28:49.858522 1237 scope.go:110] "RemoveContainer" containerID="6ff68682a6151eaecce82b16ca6bbc23ce44e71aedd871e5816dec1989a6ac7d"
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.859688275+09:00" level=info msg="RemoveContainer for \"6ff68682a6151eaecce82b16ca6bbc23ce44e71aedd871e5816dec1989a6ac7d\""
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.866650422+09:00" level=info msg="RemoveContainer for \"6ff68682a6151eaecce82b16ca6bbc23ce44e71aedd871e5816dec1989a6ac7d\" returns successfully"
Jan 20 16:28:49 node1 kubelet[1237]: I0120 16:28:49.866961 1237 scope.go:110] "RemoveContainer" containerID="f205217c8ed1ca6303d9035e95584af96708d07600887d2b4254d1080389dfbd"
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.868036395+09:00" level=info msg="RemoveContainer for \"f205217c8ed1ca6303d9035e95584af96708d07600887d2b4254d1080389dfbd\""
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.872289374+09:00" level=info msg="RemoveContainer for \"f205217c8ed1ca6303d9035e95584af96708d07600887d2b4254d1080389dfbd\" returns successfully"
Jan 20 16:28:49 node1 kubelet[1237]: I0120 16:28:49.872457 1237 scope.go:110] "RemoveContainer" containerID="b517943a97621ec70c3bbf95d4e6caa9c109ceba19eb013abdfeb252682db634"
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.873342572+09:00" level=info msg="RemoveContainer for \"b517943a97621ec70c3bbf95d4e6caa9c109ceba19eb013abdfeb252682db634\""
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.877366463+09:00" level=info msg="RemoveContainer for \"b517943a97621ec70c3bbf95d4e6caa9c109ceba19eb013abdfeb252682db634\" returns successfully"
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.879261150+09:00" level=info msg="StopPodSandbox for \"b08cbdab3744ff66a176a26643e59ec6b925082af7802dc9ea8dea29b6695331\""
However, when I run the command with the -u option
journalctl -u kubelet -f
I can't get recent logs.
I can only get logs from a day before:
Jan 19 03:01:32 node kubelet[1237]: I0119 03:01:32.192834 1237 event.go:294] "Event occurred" object="kube-system/nginx-proxy-node" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="SandboxChanged" message="Pod sandbox changed, it will be killed and re-created."
Jan 19 03:01:32 node kubelet[1237]: I0119 03:01:32.212913 1237 reflector.go:255] Listing and watching *v1.Service from vendor/k8s.io/client-go/informers/factory.go:134
Jan 19 03:01:32 node kubelet[1237]: E0119 03:01:32.213013 1237 kubelet.go:2424] "Error getting node" err="node \"node\" not found"
Jan 19 03:01:32 node kubelet[1237]: W0119 03:01:32.213540 1237 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://localhost:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
Jan 19 03:01:32 node kubelet[1237]: E0119 03:01:32.213598 1237 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://localhost:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
Why can't I get the full kubelet logs with journalctl -f?
What is the difference between the two commands?
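For reference, the difference is standard journalctl filtering: without options it follows the whole journal, while -u restricts output to one unit's entries. A quick sketch:
journalctl -f                                # follow the entire journal (kubelet, containerd, systemd, ...)
journalctl -u kubelet -f                     # follow only entries attributed to kubelet.service
journalctl -u kubelet --since "1 hour ago"   # bound the window explicitly instead of following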
Aug 01 12:49:49 master kubelet[18344]: E0801 12:49:49.534129 18344 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "master" not found
Aug 01 12:49:49 master kubelet[18344]: I0801 12:49:49.925152 18344 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 01 12:49:49 master kubelet[18344]: I0801 12:49:49.927988 18344 kubelet_node_status.go:79] Attempting to register node master
Aug 01 12:49:49 master kubelet[18344]: E0801 12:49:49.928908 18344 kubelet_node_status.go:103] Unable to register node "master" with API server: Post https://192.168.0.33:6443/api/v1/nodes: dial tcp 192.168.
Aug 01 12:49:50 master kubelet[18344]: E0801 12:49:50.004760 18344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.0.33:6443/api/v1/nodes?fieldSel
Aug 01 12:49:50 master kubelet[18344]: E0801 12:49:50.006130 18344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.0.33:6443/api/v1/services?li
Aug 01 12:49:50 master kubelet[18344]: E0801 12:49:50.008020 18344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.0.33:6443/api/v1/pods?fi
Aug 01 12:49:50 master kubelet[18344]: I0801 12:49:50.729713 18344 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 01 12:49:50 master kubelet[18344]: I0801 12:49:50.733513 18344 kubelet_node_status.go:79] Attempting to register node master
Aug 01 12:49:50 master kubelet[18344]: E0801 12:49:50.734866 18344 kubelet_node_status.go:103] Unable to register node "master" with API server: Post https://192.168.0.33:6443/api/v1/nodes: dial tcp 192.168.
Aug 01 12:49:51 master kubelet[18344]: E0801 12:49:51.006313 18344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.0.33:6443/api/v1/nodes?fieldSel
Aug 01 12:49:51 master kubelet[18344]: E0801 12:49:51.009443 18344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.0.33:6443/api/v1/services?li
Aug 01 12:49:51 master kubelet[18344]: E0801 12:49:51.010510 18344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.0.33:6443/api/v1/pods?fi
How can I diagnose this further?
telnet 192.168.0.33 6443
Trying 192.168.0.33...
telnet: Unable to connect to remote host: Connection refused
systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed 2018-08-01 12:49:48 EDT; 3min 47s ago
Docs: http://kubernetes.io/docs/
Main PID: 18344 (kubelet)
Tasks: 13 (limit: 4915)
Memory: 39.4M
CPU: 4.091s
CGroup: /system.slice/kubelet.service
└─18344 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=cgroupfs --cni-bin-dir=/opt/cni/bin --cn
Aug 01 12:53:33 master kubelet[18344]: E0801 12:53:33.522282 18344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.0.33:6443/api/v1/services?limit=500&resourceVersion=0
Aug 01 12:53:33 master kubelet[18344]: E0801 12:53:33.527787 18344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmas
Aug 01 12:53:33 master kubelet[18344]: E0801 12:53:33.537549 18344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.0.33:6443/api/v1/pods?fieldSelector=spec.nodeName
Aug 01 12:53:34 master kubelet[18344]: I0801 12:53:34.051830 18344 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 01 12:53:34 master kubelet[18344]: E0801 12:53:34.523429 18344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.0.33:6443/api/v1/services?limit=500&resourceVersion=0
Aug 01 12:53:34 master kubelet[18344]: E0801 12:53:34.530208 18344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.0.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmas
Aug 01 12:53:34 master kubelet[18344]: E0801 12:53:34.538744 18344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.0.33:6443/api/v1/pods?fieldSelector=spec.nodeName
Aug 01 12:53:35 master kubelet[18344]: E0801 12:53:35.524380 18344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://192.168.0.33:6443/api/v1/services?limit=500&resourceVersion=0
Aug 01 12:53:35 master kubelet[18344]: E0801 12:53:35.531218 18344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://192.168.0.33:6443/api/v1/nodes?fieldSelecto
Think I broke something...
I managed to find the issue by looking at the logs of the apiserver. In my particular case I found them as shown below:
osboxes@master:/var/log/pods$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
84ec4c4de5b2 k8s.gcr.io/pause:3.1 "/pause" About an hour ago Up About an hour k8s_POD_kube-apiserver-master_kube-system_fdb932ada5768a1891d839f8cf2306a9_0
ea84510e26be 272b3a60cd68 "kube-scheduler --..." About an hour ago Up About an hour k8s_kube-scheduler_kube-scheduler-master_kube-system_537879acc30dd5eff5497cb2720a6d64_1
d36af3896c3b 52096ee87d0e "kube-controller-m..." About an hour ago Up About an hour k8s_kube-controller-manager_kube-controller-manager-master_kube-system_f31b9af1b177e27c1d4ace1fa4d37d83_1
a53569da6f29 f0fad859c909 "/opt/bin/flanneld..." 4 hours ago Up 4 hours k8s_kube-flannel_kube-flannel-ds-94xm7_kube-system_7e32bed6-9585-11e8-a2f7-080027a08edc_0
9a31712003ac k8s.gcr.io/pause:3.1 "/pause" 4 hours ago Up 4 hours k8s_POD_kube-flannel-ds-94xm7_kube-system_7e32bed6-9585-11e8-a2f7-080027a08edc_0
276b6107a4b3 d5c25579d0ff "/usr/local/bin/ku..." 4 hours ago Up 4 hours k8s_kube-proxy_kube-proxy-2hcfx_kube-system_e7bebd1e-9584-11e8-a2f7-080027a08edc_0
87d2c5657240 k8s.gcr.io/pause:3.1 "/pause" 4 hours ago Up 4 hours k8s_POD_kube-proxy-2hcfx_kube-system_e7bebd1e-9584-11e8-a2f7-080027a08edc_0
d04c7669f27c b8df3b177be2 "etcd --advertise-..." 4 hours ago Up 4 hours k8s_etcd_etcd-master_kube-system_d3a295b6d0da8bbfe30c134cab4d030b_0
6c174ea2f877 k8s.gcr.io/pause:3.1 "/pause" 4 hours ago Up 4 hours k8s_POD_kube-scheduler-master_kube-system_537879acc30dd5eff5497cb2720a6d64_0
e5603c531a1c k8s.gcr.io/pause:3.1 "/pause" 4 hours ago Up 4 hours k8s_POD_kube-controller-manager-master_kube-system_f31b9af1b177e27c1d4ace1fa4d37d83_0
cf6ee3a78089 k8s.gcr.io/pause:3.1 "/pause" 4 hours ago Up 4 hours k8s_POD_etcd-master_kube-system_d3a295b6d0da8bbfe30c134cab4d030b_0
osboxes@master:/var/log/pods$ tail -f fdb932ada5768a1891d839f8cf2306a9/kube-apiserver/29.log
tail: cannot open 'fdb932ada5768a1891d839f8cf2306a9/kube-apiserver/29.log' for reading: Permission denied
tail: no files remaining
osboxes@master:/var/log/pods$ sudo tail -f fdb932ada5768a1891d839f8cf2306a9/kube-apiserver/29.log
{"log":" --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file.\n","stream":"stderr","time":"2018-08-01T16:56:17.315279517Z"}
{"log":" --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: \"example.crt,example.key\" or \"foo.crt,foo.key:*.foo.com,foo.com\". (default [])\n","stream":"stderr","time":"2018-08-01T16:56:17.315282242Z"}
{"log":" --token-auth-file string If set, the file that will be used to secure the secure port of the API server via token authentication.\n","stream":"stderr","time":"2018-08-01T16:56:17.31528618Z"}
{"log":" -v, --v Level log level for V logs\n","stream":"stderr","time":"2018-08-01T16:56:17.315288985Z"}
{"log":" --version version[=true] Print version information and quit\n","stream":"stderr","time":"2018-08-01T16:56:17.31529159Z"}
{"log":" --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging\n","stream":"stderr","time":"2018-08-01T16:56:17.315296449Z"}
{"log":" --watch-cache Enable watch caching in the apiserver (default true)\n","stream":"stderr","time":"2018-08-01T16:56:17.315299265Z"}
{"log":" --watch-cache-sizes strings List of watch cache sizes for every resource (pods, nodes, etc.), comma separated. The individual override format: resource[.group]#size, where resource is lowercase plural (no version), group is optional, and size is a number. It takes effect when watch-cache is enabled. Some resources (replicationcontrollers, endpoints, nodes, pods, services, apiservices.apiregistration.k8s.io) have system defaults set by heuristics, others default to default-watch-cache-size\n","stream":"stderr","time":"2018-08-01T16:56:17.31530237Z"}
{"log":"\n","stream":"stderr","time":"2018-08-01T16:56:17.315306007Z"}
{"log":"error: loading audit policy file: failed to read file path \"/etc/kubernetes/audit.yaml\": open /etc/kubernetes/audit.yaml: no such file or directory\n","stream":"stderr","time":"2018-08-01T16:56:17.315480703Z"}
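So in my case the apiserver was crashing because the audit policy file referenced by its flags did not exist. If you hit a similar missing-file error, a quick way to find the flag pointing at it is to grep the static pod manifest (a sketch, assuming the default kubeadm manifest path):
sudo grep -n audit /etc/kubernetes/manifests/kube-apiserver.yaml   # look for --audit-policy-file
# restore the file or drop the flag; the kubelet recreates the static pod automatically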
Ubuntu 16.04 LTS, Docker 17.12.1, Kubernetes 1.10.0
Kubelet not starting:
Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Failed with result 'exit-code'.
Note: No issue with v1.9.1
LOGS:
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.518085 20051 docker_service.go:249] Docker Info: &{ID:WDJK:3BCI:BGCM:VNF3:SXGW:XO5G:KJ3Z:EKIH:XGP7:XJGG:LFBL:YWAJ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:btrfs DriverStatus:[[Build Version Btrfs v4.15.1] [Library Vers
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.521232 20051 docker_service.go:262] Setting cgroupDriver to cgroupfs
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.532834 20051 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.533812 20051 kuberuntime_manager.go:186] Container runtime docker initialized, version: 18.05.0-ce, apiVersion: 1.37.0
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.534071 20051 csi_plugin.go:61] kubernetes.io/csi: plugin initializing...
Jun 22 06:45:55 dev-master hyperkube[20051]: W0622 06:45:55.534846 20051 kubelet.go:903] Accelerators feature is deprecated and will be removed in v1.11. Please use device plugins instead. They can be enabled using the DevicePlugins feature gate.
Jun 22 06:45:55 dev-master hyperkube[20051]: W0622 06:45:55.535035 20051 kubelet.go:909] GPU manager init error: couldn't get a handle to the library: unable to open a handle to the library, GPU feature is disabled.
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.535082 20051 server.go:129] Starting to listen on 0.0.0.0:10250
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.535164 20051 kubelet.go:1282] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.535189 20051 server.go:944] Started kubelet
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.535555 20051 event.go:209] Unable to write event: 'Post https://10.50.50.201:8001/api/v1/namespaces/default/events: dial tcp 10.50.50.201:8001: getsockopt: connection refused' (may retry after sleeping)
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.535825 20051 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536202 20051 status_manager.go:140] Starting to sync pod status with apiserver
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536253 20051 kubelet.go:1782] Starting kubelet main sync loop.
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536285 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536464 20051 volume_manager.go:247] Starting Kubelet Volume Manager
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536613 20051 desired_state_of_world_populator.go:129] Desired state populator starts to run
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.538574 20051 server.go:299] Adding debug handlers to kubelet server.
Jun 22 06:45:55 dev-master hyperkube[20051]: W0622 06:45:55.538664 20051 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.539199 20051 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.636465 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.636795 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.638630 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.638954 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.836686 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.839219 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.841028 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.841357 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:56 dev-master hyperkube[20051]: I0622 06:45:56.236826 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jun 22 06:45:56 dev-master hyperkube[20051]: I0622 06:45:56.241590 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:56 dev-master hyperkube[20051]: I0622 06:45:56.245081 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.245475 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.492206 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://10.50.50.201:8001/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.493216 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.50.50.201:8001/api/v1/pods?fieldSelector=spec.nodeName%3D10.50.50.201&limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: co
Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.494240 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://10.50.50.201:8001/api/v1/nodes?fieldSelector=metadata.name%3D10.50.50.201&limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connecti
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.036893 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.045705 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.047489 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.047787 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.413319 20051 event.go:209] Unable to write event: 'Post https://10.50.50.201:8001/api/v1/namespaces/default/events: dial tcp 10.50.50.201:8001: getsockopt: connection refused' (may retry after sleeping)
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.492781 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://10.50.50.201:8001/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.493560 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.50.50.201:8001/api/v1/pods?fieldSelector=spec.nodeName%3D10.50.50.201&limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: co
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.494574 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://10.50.50.201:8001/api/v1/nodes?fieldSelector=metadata.name%3D10.50.50.201&limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connecti
Jun 22 06:45:57 dev-master hyperkube[20051]: W0622 06:45:57.549477 20051 manager.go:340] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.659932 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.661447 20051 cpu_manager.go:155] [cpumanager] starting with none policy
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.661459 20051 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.661468 20051 policy_none.go:42] [cpumanager] none policy: Start
Jun 22 06:45:57 dev-master hyperkube[20051]: W0622 06:45:57.661523 20051 fs.go:539] stat failed on /dev/loop10 with error: no such file or directory
Jun 22 06:45:57 dev-master hyperkube[20051]: F0622 06:45:57.661535 20051 kubelet.go:1359] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 126 in cached partitions map
Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Failed with result 'exit-code'.
Run the following command on all your nodes. It worked for me:
swapoff -a
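Note that swapoff -a only lasts until the next reboot; to make it stick you usually also comment the swap entry out of /etc/fstab (a sketch, assuming an fstab-managed swap device):
sudo swapoff -a                            # disable swap now
sudo sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap line so it stays off after reboot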
I have found a lot of the same error message in your logs:
dial tcp 10.50.50.201:8001: getsockopt: connection refused
There may be several problems:
The IP address and/or port are incorrect
No access from the worker to the master
Something is wrong with your master; for example, kube-apiserver is down
You should look in that direction.
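A few quick checks along those lines, using the address from the logs (a sketch):
curl -k https://10.50.50.201:8001/healthz   # is the apiserver answering at all?
sudo ss -tlnp | grep 8001                   # is anything listening on that port?
sudo docker ps -a | grep kube-apiserver     # did the apiserver container start, or is it crash-looping?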
user1188867's answer is definitely correct.
I want to add a piece of information for further reference for those not using Ubuntu.
I hit this on a cluster with Clear Linux on bare metal. Here are instructions on how to detect the issue in such an environment and how to disable swap to solve it.
First, launching sudo systemctl status kubelet after reboot produces the following:
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─0-cni.conf, 0-containerd.conf, 10-cgroup-driver.conf
Active: activating (auto-restart) (Result: exit-code) since Thu 2020-12-17 11:04:37 CET; 2s ago
Docs: http://kubernetes.io/docs/
Process: 2404 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, st>
Process: 2405 ExecStartPost=/usr/bin/kubelet-version-check.sh store (code=exited, status=0/SUCCESS)
Main PID: 2404 (code=exited, status=255/EXCEPTION)
CPU: 683ms
The issue was actually the existence of the swap file.
To disable it:
Add the nofail option to /usr/lib/systemd/system/var-swapfile.swap. I personally used the following one-liner: sudo sed -i s/Options=/Options=nofail,/ /usr/lib/systemd/system/var-swapfile.swap
Disable swapping: sudo swapoff -a
Delete the swap file: sudo rm /var/swapfile
This procedure on Clear Linux persists the deactivation of the swap on reboots.
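To confirm swap is really gone after these steps:
swapon --show   # no output means no active swap
free -h         # the Swap row should show 0B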
I had the same exit status, but my kubelet failed to start due to a limitation on the number of max_user_watches. The following got the kubelet working again:
https://github.com/google/cadvisor/issues/1581#issuecomment-367616070
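For reference, the fix discussed in that issue amounts to raising the inotify watch limit (a sketch; 524288 is the value commonly suggested there):
sudo sysctl fs.inotify.max_user_watches=524288                            # apply immediately
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf    # persist across reboots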
This problem can also arise if the Docker service is not enabled after installation. After the next reboot, Docker does not start, so the kubelet cannot start either. So don't forget, after installing Docker:
sudo systemctl enable docker
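Or enable and start it in one step, then verify:
sudo systemctl enable --now docker
systemctl is-enabled docker   # should print "enabled"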
kubeadm init seems to hang since I started using the vSphere cloud provider. I followed the instructions from here. Has anybody got it working with 1.9?
root@master-0:~# kubeadm init --config /tmp/kube.yaml
[init] Using Kubernetes version: v1.9.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "master-0" could not be reached
[WARNING Hostname]: hostname "master-0" lookup master-0 on 8.8.8.8:53: no such host
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master-0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.11.0.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
Master OS details:
root@master-0:~# uname -r
4.4.0-21-generic
root@master-0:~# docker version
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Experimental: false
root@master-0:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
ID=ubuntu
The kubelet service seems to be running fine:
root@master-0:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 90-local-extras.conf
Active: active (running) since Mon 2018-01-22 11:25:00 UTC; 13min ago
Docs: http://kubernetes.io/docs/
Main PID: 4270 (kubelet)
Tasks: 13 (limit: 512)
Memory: 37.6M
CPU: 11.626s
CGroup: /system.slice/kubelet.service
└─4270 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
journalctl -f -u kubelet shows some connection-refused errors, probably because no network add-on is installed yet. Those errors should go away once a network add-on is installed after kubeadm init (see the sketch after the logs below).
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.759764 1184 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.761350 1184 controller.go:114] kubelet config controller: starting controller
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.762632 1184 controller.go:118] kubelet config controller: validating combination of defaults and flags
Jan 22 11:17:46 localhost systemd[1]: Started Kubernetes systemd probe.
Jan 22 11:17:46 localhost kubelet[1184]: W0122 11:17:46.070619 1184 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.081384 1184 server.go:182] Version: v1.9.1
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.081417 1184 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.082271 1184 plugins.go:101] No cloud provider specified.
Jan 22 11:17:46 localhost kubelet[1184]: error: failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Unit entered failed state.
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 11:17:56 localhost systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jan 22 11:17:56 localhost systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 22 11:17:56 localhost systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.410883 1229 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.411198 1229 controller.go:114] kubelet config controller: starting controller
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.411353 1229 controller.go:118] kubelet config controller: validating combination of defaults and flags
Jan 22 11:17:56 localhost systemd[1]: Started Kubernetes systemd probe.
Jan 22 11:17:56 localhost kubelet[1229]: W0122 11:17:56.424264 1229 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429102 1229 server.go:182] Version: v1.9.1
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429156 1229 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429247 1229 plugins.go:101] No cloud provider specified.
Jan 22 11:17:56 localhost kubelet[1229]: E0122 11:17:56.461608 1229 certificate_manager.go:314] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://10.11.0.101:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 10.11.0.101:6443: getsockopt: connection refused
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.491374 1229 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.492069 1229 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.492102 1229 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
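As an example of installing a network add-on once kubeadm init completes (a sketch; this is the Flannel manifest URL from that era, adjust for whichever CNI plugin you use):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml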
docker ps, controller & scheduler logs
root@master-0:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6db549891439 677911f7ae8f "kube-scheduler --..." About an hour ago Up About an hour k8s_kube-scheduler_kube-scheduler-master-0_kube-system_df32e281019039e73be77e3f53d09596_0
4f7ddefbd86e 4978f9a64966 "kube-controller-m..." About an hour ago Up About an hour k8s_kube-controller-manager_kube-controller-manager-master-0_kube-system_34bad395be69e74db6304d6c4218c536_0
18604db89db6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-scheduler-master-0_kube-system_df32e281019039e73be77e3f53d09596_0
252b86ea4b5e gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-controller-manager-master-0_kube-system_34bad395be69e74db6304d6c4218c536_0
4021061bf8a6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-apiserver-master-0_kube-system_7a3ae9279d0ca7b4ada8333fbe7442ed_0
4f94163d313b gcr.io/google_containers/etcd-amd64:3.1.10 "etcd --name=etcd0..." About an hour ago Up About an hour 0.0.0.0:2379-2380->2379-2380/tcp, 0.0.0.0:4001->4001/tcp, 7001/tcp etcd
root@master-0:~# docker logs -f 4f7ddefbd86e
I0122 11:25:06.253706 1 controllermanager.go:108] Version: v1.9.1
I0122 11:25:06.258712 1 leaderelection.go:174] attempting to acquire leader lease...
E0122 11:25:06.259448 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:09.711377 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:13.969270 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:17.564964 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:20.616174 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
root@master-0:~# docker logs -f 6db549891439
W0122 11:25:06.285765 1 server.go:159] WARNING: all flags than --config are deprecated. Please begin using a config file ASAP.
I0122 11:25:06.292865 1 server.go:551] Version: v1.9.1
I0122 11:25:06.295776 1 server.go:570] starting healthz server on 127.0.0.1:10251
E0122 11:25:06.295947 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: Get https://10.11.0.101:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296027 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: Get https://10.11.0.101:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296092 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: Get https://10.11.0.101:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296160 1 reflector.go:205] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:590: Failed to list *v1.Pod: Get https://10.11.0.101:6443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296218 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.StatefulSet: Get https://10.11.0.101:6443/apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.297374 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolume: Get https://10.11.0.101:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
There was a bug in the controller manager when starting with the vsphere cloud provider. See https://github.com/kubernetes/kubernetes/issues/57279, fixed in 1.9.2
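To confirm which version the control plane actually runs before and after upgrading (a sketch, assuming kubectl access and the default kubeadm manifest path):
kubectl version --short                                               # client and server versions
grep image: /etc/kubernetes/manifests/kube-controller-manager.yaml    # controller-manager image tag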
The service is not starting, and the listener on port 8080 is not active.
Here is my Kubernetes configuration:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://centos-master:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
systemctl status kube-apiserver -l
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Mon 2017-08-14 12:07:04 +0430; 29s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 2087 ExecStart=/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS (code=exited, status=2)
Main PID: 2087 (code=exited, status=2)
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Aug 14 12:07:04 centos-master systemd[1]: Failed to start Kubernetes API Server.
Aug 14 12:07:04 centos-master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service failed.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service holdoff time over, scheduling restart.
Aug 14 12:07:04 centos-master systemd[1]: start request repeated too quickly for kube-apiserver.service
Aug 14 12:07:04 centos-master systemd[1]: Failed to start Kubernetes API Server.
Aug 14 12:07:04 centos-master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service failed.
tail -n 1000 /var/log/messages
resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.240160 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466: Failed to list *api.PersistentVolume: Get http://centos-master:8080/api/v1/persistentvolumes?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.242039 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:470: Failed to list *api.Service: Get http://centos-master:8080/api/v1/services?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.242924 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:457: Failed to list *api.Pod: Get http://centos-master:8080/api/v1/pods?fieldSelector=spec.nodeName%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.269386 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:473: Failed to list *api.ReplicationController: Get http://centos-master:8080/api/v1/replicationcontrollers?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.285782 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481: Failed to list *extensions.ReplicaSet: Get http://centos-master:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.286529 606 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.PersistentVolumeClaim: Get http://centos-master:8080/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
systemd[1]: kube-apiserver.service: main process exited, code=exited, status=2/INVALIDARGUMENT
The arguments you're using do not seem valid.
Check the list of valid arguments here.
You can also follow the Kubernetes The Hard Way guide for a trusted way to run the API server.
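One way to surface exactly which flag the binary rejects (a sketch; the config paths assume the classic RPM-style install whose variables are shown above):
sudo journalctl -u kube-apiserver -n 50 --no-pager    # the status=2/INVALIDARGUMENT exit usually follows a line naming the bad flag
grep -h '^KUBE_' /etc/kubernetes/config /etc/kubernetes/apiserver 2>/dev/null   # review every flag the unit passes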