Error starting the Kubernetes minikube cluster

I have Kubernetes v1.10.0 and minikube v0.28 installed on macOS 10.13.5, but while starting minikube I keep getting these errors:
$ minikube start
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0708 16:13:43.193267 13864 start.go:294] Error starting cluster: timed out waiting to unmark master: getting node minikube: Get https://192.168.99.100:8443/api/v1/nodes/minikube: dial tcp 192.168.99.100:8443: i/o timeout
I have also tried minikube delete and minikube start, as well as reinstalling other minikube versions, but that didn't help.
$ minikube version
minikube version: v0.28.0
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
I am not even able to ping the virtual machine, although I can see the minikube VM in VirtualBox in Running status.
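A few checks that can help localize where the connection fails (these are standard minikube/VirtualBox commands, not something from the original post):
$ ping -c 3 192.168.99.100     # host-to-VM reachability on the host-only network
$ minikube ssh                 # reach the VM through the driver, bypassing the API server port
$ VBoxManage list hostonlyifs  # inspect the VirtualBox host-only interface the VM uses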
Minikube Logs:
F0708 16:46:10.394098 2651 server.go:233] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
Jul 08 16:46:10 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jul 08 16:46:10 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 08 16:46:20 minikube systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jul 08 16:46:20 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 08 16:46:20 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jul 08 16:46:20 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jul 08 16:46:20 minikube kubelet[2730]: Flag --cadvisor-port has been deprecated, The default will change to 0 (disabled) in 1.12, and the cadvisor port will be removed entirely in 1.13
Jul 08 16:46:20 minikube kubelet[2730]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 08 16:46:20 minikube kubelet[2730]: Flag --allow-privileged has been deprecated, will be removed in a future version
Jul 08 16:46:20 minikube kubelet[2730]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 08 16:46:20 minikube kubelet[2730]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 08 16:46:20 minikube kubelet[2730]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 08 16:46:20 minikube kubelet[2730]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 08 16:46:20 minikube kubelet[2730]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 08 16:46:20 minikube kubelet[2730]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.694078 2730 feature_gate.go:226] feature gates: &{{} map[]}
Jul 08 16:46:20 minikube kubelet[2730]: W0708 16:46:20.702932 2730 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.703121 2730 server.go:376] Version: v1.10.0
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.703187 2730 feature_gate.go:226] feature gates: &{{} map[]}
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.703285 2730 plugins.go:89] No cloud provider specified.
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.726584 2730 server.go:613] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.726934 2730 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.726983 2730 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.728063 2730 container_manager_linux.go:266] Creating device plugin manager: true
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.728189 2730 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.728313 2730 state_file.go:82] [cpumanager] state file: created new state file "/var/lib/kubelet/cpu_manager_state"
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.728418 2730 kubelet.go:272] Adding pod path: /etc/kubernetes/manifests
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.728460 2730 kubelet.go:297] Watching apiserver
Jul 08 16:46:20 minikube kubelet[2730]: E0708 16:46:20.734450 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.99.100:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:20 minikube kubelet[2730]: E0708 16:46:20.734566 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.99.100:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:20 minikube kubelet[2730]: E0708 16:46:20.735067 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.100:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:20 minikube kubelet[2730]: W0708 16:46:20.735453 2730 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.735561 2730 kubelet.go:556] Hairpin mode set to "hairpin-veth"
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.744020 2730 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.744065 2730 client.go:104] Start docker client with request timeout=2m0s
Jul 08 16:46:20 minikube kubelet[2730]: W0708 16:46:20.750158 2730 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.752473 2730 docker_service.go:244] Docker cri networking managed by kubernetes.io/no-op
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.756618 2730 docker_service.go:249] Docker Info: &{ID:K2FS:LJJY:RYGP:JKVS:74G5:3HA4:L26I:VOFW:CGF5:JB6F:BMQV:G3GO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2018-07-08T16:46:20.753639901Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.16.14 OperatingSystem:Buildroot 2018.05 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc420122fc0 NCPU:2 MemTotal:2088189952 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikube Labels:[provider=virtualbox] ExperimentalBuild:false ServerVersion:17.12.1-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9b55aab90508bd389d7654c4baf173a981477d55 Expected:9b55aab90508bd389d7654c4baf173a981477d55} RuncCommit:{ID:9f9c96235cc97674e935002fc3d78361b696a69e Expected:9f9c96235cc97674e935002fc3d78361b696a69e} InitCommit:{ID:N/A Expected:} SecurityOptions:[name=seccomp,profile=default]}
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.756704 2730 docker_service.go:262] Setting cgroupDriver to cgroupfs
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.768856 2730 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.772344 2730 kuberuntime_manager.go:186] Container runtime docker initialized, version: 17.12.1-ce, apiVersion: 1.35.0
Jul 08 16:46:20 minikube kubelet[2730]: W0708 16:46:20.772666 2730 probe.go:215] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.772949 2730 csi_plugin.go:61] kubernetes.io/csi: plugin initializing...
Jul 08 16:46:20 minikube kubelet[2730]: E0708 16:46:20.821518 2730 kubelet.go:1277] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.822056 2730 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.822138 2730 status_manager.go:140] Starting to sync pod status with apiserver
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.822179 2730 kubelet.go:1777] Starting kubelet main sync loop.
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.822215 2730 kubelet.go:1794] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.822420 2730 server.go:129] Starting to listen on 0.0.0.0:10250
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.822963 2730 server.go:299] Adding debug handlers to kubelet server.
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.823936 2730 volume_manager.go:247] Starting Kubelet Volume Manager
Jul 08 16:46:20 minikube kubelet[2730]: E0708 16:46:20.824420 2730 event.go:209] Unable to write event: 'Post https://192.168.99.100:8443/api/v1/namespaces/default/events: dial tcp 192.168.99.100:8443: getsockopt: connection refused' (may retry after sleeping)
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.825365 2730 server.go:944] Started kubelet
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.825492 2730 desired_state_of_world_populator.go:129] Desired state populator starts to run
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.922627 2730 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.924838 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.927054 2730 kubelet_node_status.go:82] Attempting to register node minikube
Jul 08 16:46:20 minikube kubelet[2730]: E0708 16:46:20.927586 2730 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://192.168.99.100:8443/api/v1/nodes: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:21 minikube kubelet[2730]: I0708 16:46:21.123837 2730 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
Jul 08 16:46:21 minikube kubelet[2730]: I0708 16:46:21.127937 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:21 minikube kubelet[2730]: I0708 16:46:21.130362 2730 kubelet_node_status.go:82] Attempting to register node minikube
Jul 08 16:46:21 minikube kubelet[2730]: E0708 16:46:21.130766 2730 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://192.168.99.100:8443/api/v1/nodes: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:21 minikube kubelet[2730]: I0708 16:46:21.524092 2730 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
Jul 08 16:46:21 minikube kubelet[2730]: I0708 16:46:21.531377 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:21 minikube kubelet[2730]: I0708 16:46:21.534027 2730 kubelet_node_status.go:82] Attempting to register node minikube
Jul 08 16:46:21 minikube kubelet[2730]: E0708 16:46:21.534443 2730 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://192.168.99.100:8443/api/v1/nodes: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:21 minikube kubelet[2730]: E0708 16:46:21.736215 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.99.100:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:21 minikube kubelet[2730]: E0708 16:46:21.741184 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.99.100:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:21 minikube kubelet[2730]: E0708 16:46:21.742867 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.100:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:22 minikube kubelet[2730]: I0708 16:46:22.324217 2730 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
Jul 08 16:46:22 minikube kubelet[2730]: I0708 16:46:22.334948 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:22 minikube kubelet[2730]: I0708 16:46:22.337989 2730 kubelet_node_status.go:82] Attempting to register node minikube
Jul 08 16:46:22 minikube kubelet[2730]: E0708 16:46:22.338431 2730 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://192.168.99.100:8443/api/v1/nodes: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:22 minikube kubelet[2730]: E0708 16:46:22.737509 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.99.100:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:22 minikube kubelet[2730]: E0708 16:46:22.741882 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.99.100:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:22 minikube kubelet[2730]: E0708 16:46:22.743575 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.100:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:22 minikube kubelet[2730]: I0708 16:46:22.908788 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:22 minikube kubelet[2730]: I0708 16:46:22.910882 2730 cpu_manager.go:155] [cpumanager] starting with none policy
Jul 08 16:46:22 minikube kubelet[2730]: I0708 16:46:22.910923 2730 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jul 08 16:46:22 minikube kubelet[2730]: I0708 16:46:22.910931 2730 policy_none.go:42] [cpumanager] none policy: Start
Jul 08 16:46:22 minikube kubelet[2730]: Starting Device Plugin manager
Jul 08 16:46:22 minikube kubelet[2730]: E0708 16:46:22.922215 2730 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node "minikube" not found
Jul 08 16:46:23 minikube kubelet[2730]: E0708 16:46:23.739107 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.99.100:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:23 minikube kubelet[2730]: E0708 16:46:23.742661 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.99.100:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:23 minikube kubelet[2730]: E0708 16:46:23.744902 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.100:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:23 minikube kubelet[2730]: I0708 16:46:23.925063 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:23 minikube kubelet[2730]: I0708 16:46:23.936151 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:23 minikube kubelet[2730]: I0708 16:46:23.936545 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:23 minikube kubelet[2730]: I0708 16:46:23.938910 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:23 minikube kubelet[2730]: I0708 16:46:23.946028 2730 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/099f1c2b79126109140a1f77e211df00-kubeconfig") pod "kube-scheduler-minikube" (UID: "099f1c2b79126109140a1f77e211df00")
Jul 08 16:46:23 minikube kubelet[2730]: W0708 16:46:23.946195 2730 status_manager.go:461] Failed to get status for pod "kube-scheduler-minikube_kube-system(099f1c2b79126109140a1f77e211df00)": Get https://192.168.99.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-minikube: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:23 minikube kubelet[2730]: I0708 16:46:23.946773 2730 kubelet_node_status.go:82] Attempting to register node minikube
Jul 08 16:46:23 minikube kubelet[2730]: E0708 16:46:23.947106 2730 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://192.168.99.100:8443/api/v1/nodes: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:23 minikube kubelet[2730]:

It's working as expected now. Earlier I was facing a connectivity problem from the host to the VM; at that time I was using a mobile-hotspot internet connection, which was likely what blocked the connection to the VM.
Now, on my home Wi-Fi, minikube delete followed by minikube start works perfectly.
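For reference, the reset sequence was just the following (a minimal sketch; the VirtualBox driver flag reflects my setup and is the minikube default on macOS with VirtualBox installed):
$ minikube delete                          # remove the broken VM and its cached state
$ minikube start --vm-driver=virtualbox    # recreate the cluster from scratch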
Thanks

Related

Installing a Kubernetes cluster on the master node

I am new to the container world and am trying to set up a Kubernetes cluster locally across two Linux VMs. During cluster initialization it got stuck at:
[apiclient] Created API client, waiting for the control plane to become ready
I have followed the pre-flight check steps:
[root@lm--kube-glusterfs--central ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.0
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [lm--kube-glusterfs--central kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.99.7.215]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
OS: Red Hat Enterprise Linux Server release 7.4 (Maipo)
Kubernetes versions:
kubeadm-1.6.0-0.x86_64.rpm
kubectl-1.6.0-0.x86_64.rpm
kubelet-1.6.0-0.x86_64.rpm
kubernetes-cni-0.6.0-0.x86_64.rpm
cri-tools-1.12.0-0.x86_64.rpm
How do I debug this issue, and is there any version mismatch between the kube components? The same setup worked before, when I used the google-cloud yum repo and installed with yum -y install kubelet kubeadm kubectl.
I could not use the repo this time due to firewall issues, hence installing from the RPMs.
Output after executing journalctl -xeu kubelet:
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: W0702 09:45:09.841246 28749 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: I0702 09:45:09.841304 28749 kubelet.go:494] Hairpin mode set to "hairpin-veth"
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: W0702 09:45:09.845626 28749 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: I0702 09:45:09.857969 28749 docker_service.go:187] Docker cri networking managed by kubernetes.io/no-op
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs
Jul 02 09:45:09 lm--son-config-cn--central systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Jul 02 09:45:09 lm--son-config-cn--central systemd[1]: Unit kubelet.service entered failed state.
Jul 02 09:45:09 lm--son-config-cn--central systemd[1]: kubelet.service failed.
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: kubelet.service holdoff time over, scheduling restart.
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: Started Kubernetes Kubelet Server.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: Starting Kubernetes Kubelet Server...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.251465 28772 feature_gate.go:144] feature gates: map[]
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.251889 28772 server.go:469] No API client: no api servers specified
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.252009 28772 docker.go:364] Connecting to docker on unix:///var/run/docker.sock
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.252049 28772 docker.go:384] Start docker client with request timeout=2m0s
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.259436 28772 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.275674 28772 manager.go:143] cAdvisor running in container: "/system.slice"
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.317509 28772 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial r
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.328881 28772 fs.go:117] Filesystem partitions: map[/dev/vda2:{mountpoint:/ major:253 mino
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.335711 28772 manager.go:198] Machine: {NumCores:8 CpuFrequency:2095078 MemoryCapacity:337
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: [7] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unifie
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.338001 28772 manager.go:204] Version: {KernelVersion:3.10.0-693.11.6.el7.x86_64 Container
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.338967 28772 server.go:350] No api server defined - no events will be sent to API server.
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.338980 28772 server.go:509] --cgroups-per-qos enabled, but --cgroup-root was not specifie
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.342041 28772 container_manager_linux.go:245] container manager verified user specified cg
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.342071 28772 container_manager_linux.go:250] Creating Container Manager object based on N
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.346505 28772 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.346571 28772 kubelet.go:494] Hairpin mode set to "hairpin-veth"
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.351473 28772 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.363583 28772 docker_service.go:187] Docker cri networking managed by kubernetes.io/no-op
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: Unit kubelet.service entered failed state.
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: kubelet.service failed.
This is related to a known issue.
There are a few fixes shown there; all you need is to change the kubelet cgroup driver to systemd, so that it matches the driver Docker reports.
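A minimal sketch of that fix for a kubeadm 1.6-era RHEL install; the drop-in path below is the conventional kubeadm location and should be verified on your system:
$ sudo sed -i 's/--cgroup-driver=cgroupfs/--cgroup-driver=systemd/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
$ docker info | grep -i cgroup    # should report the same driver the kubelet now uses: systemd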

Cannot use kubeadm init to set up a single Kubernetes cluster

I am a newbie learning Kubernetes. When I try to build a Kubernetes cluster, I run into some problems that I don't know how to resolve.
When I enter the command:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
I encountered these problems (shown as a screenshot in the original post).
I tried specifying --apiserver-advertise-address=$(hostname -i), and ran into another problem:
$sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address 10.181.144.168
I0316 21:11:04.632624 37136 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: dial tcp 23.236.58.218:443: connect: network is unreachable
I0316 21:11:04.632720 37136 version.go:94] falling back to the local client version: v1.12.2
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [vsearchtradeg1host010181144168.et2 localhost] and IPs [10.181.144.168 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [vsearchtradeg1host010181144168.et2 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [vsearchtradeg1host010181144168.et2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.181.144.168]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
Most answers told me I need internet access, but I have already downloaded the core images:
$ sudo docker images
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.12.2   15e9da1ca195   4 months ago    96.5MB
k8s.gcr.io/kube-apiserver            v1.12.2   51a9c329b7c5   4 months ago    194MB
k8s.gcr.io/kube-controller-manager   v1.12.2   15548c720a70   4 months ago    164MB
k8s.gcr.io/kube-scheduler            v1.12.2   d6d57c76136c   4 months ago    58.3MB
k8s.gcr.io/etcd                      3.2.24    3cab8e1b9802   5 months ago    220MB
k8s.gcr.io/coredns                   1.2.2     367cdc8433a4   6 months ago    39.2MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   15 months ago   742kB
And I found this in the log at /var/log/messages:
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 systemd[1]: Started Kubernetes systemd probe.
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 systemd[1]: Starting Kubernetes systemd probe.
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.617395 81273 server.go:408] Version: v1.12.2
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.617559 81273 plugins.go:99] No cloud provider specified.
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.620585 81273 certificate_store.go:131] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.654038 81273 server.go:667] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.654274 81273 container_manager_linux.go:247] container manager verified user specified cgroup-root exists: []
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.654289 81273 container_manager_linux.go:252] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.654396 81273 container_manager_linux.go:271] Creating device plugin manager: true
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.654432 81273 state_mem.go:36] [cpumanager] initializing new in-memory state store
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.654579 81273 state_mem.go:84] [cpumanager] updated default cpuset: ""
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.654599 81273 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.654695 81273 kubelet.go:279] Adding pod path: /etc/kubernetes/manifests
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.654745 81273 kubelet.go:304] Watching apiserver
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: E0318 10:48:39.655393 81273 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://10.181.144.168:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.181.144.168:6443: connect: connection refused
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: E0318 10:48:39.655393 81273 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://10.181.144.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dvsearchtradeg1host010181144168.et2&limit=500&resourceVersion=0: dial tcp 10.181.144.168:6443: connect: connection refused
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: E0318 10:48:39.655490 81273 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.181.144.168:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dvsearchtradeg1host010181144168.et2&limit=500&resourceVersion=0: dial tcp 10.181.144.168:6443: connect: connection refused
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.656470 81273 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.656489 81273 client.go:104] Start docker client with request timeout=2m0s
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.657494 81273 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.657515 81273 docker_service.go:236] Hairpin mode set to "hairpin-veth"
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.657592 81273 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.659005 81273 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.659058 81273 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.659091 81273 docker_service.go:251] Docker cri networking managed by cni
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.671559 81273 docker_service.go:256] Docker Info: &{ID:FGTF:A4SR:ARMW:4TII:2HCM:CT3G:NYZA:XRMB:CSHA:E5X6:TWCE:5JIP Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:8 Driver:overlay DriverStatus:[[Backing Filesystem extfs] [Supports d_type true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:18 OomKillDisable:true NGoroutines:26 SystemTime:2019-03-18T10:48:39.661768258+08:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-327.ali2018.alios7.x86_64 OperatingSystem:Alibaba Group Enterprise Linux Server 7.2 (Paladin) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc420925f10 NCPU:32 MemTotal:134992273408 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:vsearchtradeg1host010181144168.et2 Labels:[] ExperimentalBuild:false ServerVersion:17.06.2-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:6e23458c129b551d5c9871e5174f6b1b7f6d1170 Expected:6e23458c129b551d5c9871e5174f6b1b7f6d1170} RuncCommit:{ID:b0917904e049873e6fe70520b9a049b8cb3a9ea2 Expected:b0917904e049873e6fe70520b9a049b8cb3a9ea2} InitCommit:{ID:949e6fa Expected:949e6fa} SecurityOptions:[name=seccomp,profile=default]}
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.671630 81273 docker_service.go:269] Setting cgroupDriver to cgroupfs
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.685561 81273 kuberuntime_manager.go:197] Container runtime docker initialized, version: 17.06.2-ce, apiVersion: 1.30.0
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.688250 81273 server.go:1013] Started kubelet
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: E0318 10:48:39.688286 81273 kubelet.go:1287] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.688306 81273 server.go:133] Starting to listen on 0.0.0.0:10250
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: E0318 10:48:39.688620 81273 event.go:212] Unable to write event: 'Post https://10.181.144.168:6443/api/v1/namespaces/default/events: dial tcp 10.181.144.168:6443: connect: connection refused' (may retry after sleeping)
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.688664 81273 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.688694 81273 status_manager.go:152] Starting to sync pod status with apiserver
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.688707 81273 kubelet.go:1804] Starting kubelet main sync loop.
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.688726 81273 kubelet.go:1821] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.688746 81273 volume_manager.go:248] Starting Kubelet Volume Manager
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.688770 81273 desired_state_of_world_populator.go:130] Desired state populator starts to run
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.689076 81273 server.go:318] Adding debug handlers to kubelet server.
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.689461 81273 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: E0318 10:48:39.689645 81273 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.709949 81273 container.go:507] Failed to update stats for container "/": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.750913 81273 container.go:507] Failed to update stats for container "/agent": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/agent/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.764355 81273 container.go:507] Failed to update stats for container "/system.slice/crond.service": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/system.slice/crond.service/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.764979 81273 container.go:507] Failed to update stats for container "/system.slice/staragentctl.service": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/system.slice/staragentctl.service/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.765499 81273 container.go:507] Failed to update stats for container "/system.slice/systemd-journal-catalog-update.service": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/system.slice/systemd-journal-catalog-update.service/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.766343 81273 container.go:507] Failed to update stats for container "/system.slice/systemd-remount-fs.service": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/system.slice/systemd-remount-fs.service/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.766793 81273 container.go:507] Failed to update stats for container "/system.slice/docker.service": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/system.slice/docker.service/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.766966 81273 container.go:507] Failed to update stats for container "/system.slice/z_nic_irq_set.service": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/system.slice/z_nic_irq_set.service/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.767171 81273 container.go:507] Failed to update stats for container "/system.slice/syslog-ng.service": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/system.slice/syslog-ng.service/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.767481 81273 container.go:507] Failed to update stats for container "/system.slice/systemd-fsck-root.service": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/system.slice/systemd-fsck-root.service/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.768221 81273 container.go:507] Failed to update stats for container "/system.slice/systemd-update-utmp.service": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/system.slice/systemd-update-utmp.service/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.768707 81273 container.go:507] Failed to update stats for container "/agent/logagent": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/agent/logagent/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.769014 81273 container.go:507] Failed to update stats for container "/system.slice/ntpdate.service": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/system.slice/ntpdate.service/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.769153 81273 container.go:507] Failed to update stats for container "/system.slice/mcelog.service": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/system.slice/mcelog.service/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: W0318 10:48:39.769860 81273 container.go:507] Failed to update stats for container "/system.slice/rhel-domainname.service": failure - /sys/fs/cgroup/cpuset,cpu,cpuacct/system.slice/rhel-domainname.service/cpuacct.stat is expected to have 4 fields, continuing to push stats
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.788819 81273 kubelet.go:1821] skipping pod synchronization - [container runtime is down]
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: E0318 10:48:39.788824 81273 kubelet.go:2236] node "vsearchtradeg1host010181144168.et2" not found
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.788842 81273 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.791263 81273 kubelet_node_status.go:70] Attempting to register node vsearchtradeg1host010181144168.et2
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: E0318 10:48:39.791612 81273 kubelet_node_status.go:92] Unable to register node "vsearchtradeg1host010181144168.et2" with API server: Post https://10.181.144.168:6443/api/v1/nodes: dial tcp 10.181.144.168:6443: connect: connection refused
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.847902 81273 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.849675 81273 cpu_manager.go:155] [cpumanager] starting with none policy
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.849686 81273 cpu_manager.go:156] [cpumanager] reconciling every 10s
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: I0318 10:48:39.849695 81273 policy_none.go:42] [cpumanager] none policy: Start
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 kubelet[81273]: F0318 10:48:39.849713 81273 kubelet.go:1359] Failed to start ContainerManager failed to get rootfs info: unable to find data in memory cache
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 systemd[1]: Unit kubelet.service entered failed state.
Mar 18 10:48:39 vsearchtradeg1host010181144168.et2 systemd[1]: kubelet.service failed.
I am hoping someone can help me find the answer!
You should pass --apiserver-advertise-address=$(hostname -i) in the init command.
Try to launch the command that you posted: you already ran it with --apiserver-advertise-address specified. But I think, as pointed out in the comments, you don't have access to the internet, or are behind a proxy or firewall blocking internet access.
Did you have any cluster running prior to this? If so, you have to clean up old configs and re-bootstrap your cluster.
Reset the current cluster: kubeadm reset -f
Delete the obsolete config (the entire directory): sudo rm -rf /etc/kubernetes/
Pull the images once again: kubeadm config images pull
Pass bridged IPv4 traffic to iptables chains, a requirement for some CNI plugins (see the note after this list for making it persistent): sysctl net.bridge.bridge-nf-call-iptables=1
Init the cluster: kubeadm init --pod-network-cidr=10.244.0.0/16
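Note that the sysctl setting above does not persist across reboots by itself. A small sketch of making it permanent (the file name under /etc/sysctl.d/ is a conventional choice, not from the original answer):
$ echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
$ sudo sysctl --system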
You need to pass the Kubernetes version during init to remove the error about not being able to reach the internet:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=1.12.2 --apiserver-advertise-address=$(hostname -i)
This should work. If hostname -i doesn't return a usable address, use your machine's IP address as reported by ifconfig or the ip command.
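If you need to look the address up by hand, a quick sketch (interface names and addresses differ per machine; 10.181.144.168 is the address from the question):
$ ip -4 addr show scope global    # note the machine's primary IPv4 address
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=1.12.2 --apiserver-advertise-address=10.181.144.168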
Try running init with the --image-repository flag, so the control-plane images are pulled from a reachable registry.
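A sketch of that suggestion; the flag is available in newer kubeadm releases (around v1.13), and the Aliyun mirror below is just a commonly used example for hosts that cannot reach k8s.gcr.io, not part of the original answer:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers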

ICP 2.1.0.3 Install Timeout: FAILED - RETRYING: Waiting for kube-dns to start

It looks like the issue is caused by CNI (Calico), but I am not sure what the fix is in ICP (see the journalctl -u kubelet logs below).
ICP Installer Log:
FAILED! => {"attempts": 100, "changed": true, "cmd": "kubectl -n kube-system get daemonset kube-dns -o=custom-columns=A:.status.numberAvailable,B:.status.desiredNumberScheduled --no-headers=true | tr -s \" \" | awk '$1 == $2 {print \"READY\"}'", "delta": "0:00:00.403489", "end": "2018-07-08 09:04:21.922839", "rc": 0, "start": "2018-07-08 09:04:21.519350", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
journalctl -u kubelet:
Jul 08 22:40:38 dev-master hyperkube[2763]: E0708 22:40:38.548157 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: nodes is forbidden: User "kubelet" cannot list nodes at the cluster scope
Jul 08 22:40:38 dev-master hyperkube[2763]: E0708 22:40:38.549872 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "kubelet" cannot list pods at the cluster scope
Jul 08 22:40:38 dev-master hyperkube[2763]: E0708 22:40:38.555379 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: services is forbidden: User "kubelet" cannot list services at the cluster scope
Jul 08 22:40:38 dev-master hyperkube[2763]: E0708 22:40:38.738411 2763 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"k8s-master-10.50.50.201.153f85e7528e5906", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"k8s-master-10.50.50.201", UID:"b0ed63e50c3259666286e5a788d12b81", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{scheduler}"}, Reason:"Started", Message:"Started container", Source:v1.EventSource{Component:"kubelet", Host:"10.50.50.201"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbec8c296b58a5506, ext:106413065445, loc:(*time.Location)(0xb58e300)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbec8c296b58a5506, ext:106413065445, loc:(*time.Location)(0xb58e300)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "kubelet" cannot create events in the namespace "kube-system"' (will not retry!)
Jul 08 22:40:43 dev-master hyperkube[2763]: E0708 22:40:43.938806 2763 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jul 08 22:40:44 dev-master hyperkube[2763]: E0708 22:40:44.556337 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: nodes is forbidden: User "kubelet" cannot list nodes at the cluster scope
Jul 08 22:40:44 dev-master hyperkube[2763]: E0708 22:40:44.557513 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "kubelet" cannot list pods at the cluster scope
Jul 08 22:40:44 dev-master hyperkube[2763]: E0708 22:40:44.561007 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: services is forbidden: User "kubelet" cannot list services at the cluster scope
Jul 08 22:40:45 dev-master hyperkube[2763]: E0708 22:40:45.557584 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: nodes is forbidden: User "kubelet" cannot list nodes at the cluster scope
Jul 08 22:40:45 dev-master hyperkube[2763]: E0708 22:40:45.558375 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: pods is forbidden: User "kubelet" cannot list pods at the cluster scope
Jul 08 22:40:45 dev-master hyperkube[2763]: E0708 22:40:45.561807 2763 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: services is forbidden: User "kubelet" cannot list services at the cluster scope
Jul 08 22:40:46 dev-master hyperkube[2763]: I0708 22:40:46.393905 2763 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jul 08 22:40:46 dev-master hyperkube[2763]: I0708 22:40:46.396261 2763 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jul 08 22:40:46 dev-master hyperkube[2763]: E0708 22:40:46.397540 2763 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: nodes is forbidden: User "kubelet" cannot create nodes at the cluster scope
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.161949 9689 cni.go:259] Error adding network: no configured Calico pools
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.161980 9689 cni.go:227] Error while adding to cni network: no configured Calico pools
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.468392 9689 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-dns-splct_kube-system" network: no configured Calico
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.468455 9689 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-dns-splct_kube-system(113e64b2-82e6-11e8-83bb-0242a9e42805)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.468479 9689 kuberuntime_manager.go:646] createPodSandbox for pod "kube-dns-splct_kube-system(113e64b2-82e6-11e8-83bb-0242a9e42805)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.468556 9689 pod_workers.go:186] Error syncing pod 113e64b2-82e6-11e8-83bb-0242a9e42805 ("kube-dns-splct_kube-system(113e64b2-82e6-11e8-83bb-0242a9e42805)"), skipping: failed to "CreatePodSandbox" for "kube-d
Jul 08 19:43:48 dev-master hyperkube[9689]: I0708 19:43:48.938222 9689 kuberuntime_manager.go:513] Container {Name:calico-node Image:ibmcom/calico-node:v3.0.4 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:ETCD_ENDPOINTS Value: ValueFrom:&EnvVarSource
Jul 08 19:43:48 dev-master hyperkube[9689]: e:FELIX_HEALTHENABLED Value:true ValueFrom:nil} {Name:IP_AUTODETECTION_METHOD Value:can-reach=10.50.50.201 ValueFrom:nil}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:lib-modules ReadOnly:true MountPath:/lib/m
Jul 08 19:43:48 dev-master hyperkube[9689]: I0708 19:43:48.938449 9689 kuberuntime_manager.go:757] checking backoff for container "calico-node" in pod "calico-node-wpln7_kube-system(10107b3e-82e6-11e8-83bb-0242a9e42805)"
Jul 08 19:43:48 dev-master hyperkube[9689]: I0708 19:43:48.938699 9689 kuberuntime_manager.go:767] Back-off 5m0s restarting failed container=calico-node pod=calico-node-wpln7_kube-system(10107b3e-82e6-11e8-83bb-0242a9e42805)
Jul 08 19:43:48 dev-master hyperkube[9689]: E0708 19:43:48.938735 9689 pod_workers.go:186] Error syncing pod 10107b3e-82e6-11e8-83bb-0242a9e42805 ("calico-node-wpln7_kube-system(10107b3e-82e6-11e8-83bb-0242a9e42805)"), skipping: failed to "StartContainer" for "calic
docker ps (master node): the container k8s_POD_kube-dns-splct_kube-system-* is repeatedly crashing.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ed24d636fdd1 ibmcom/pause:3.0 "/pause" 1 second ago Up Less than a second k8s_POD_kube-dns-splct_kube-system_113e64b2-82e6-11e8-83bb-0242a9e42805_121
49b648837900 ibmcom/calico-cni "/install-cni.sh" 5 minutes ago Up 5 minutes k8s_install-cni_calico-node-wpln7_kube-system_10107b3e-82e6-11e8-83bb-0242a9e42805_0
933ff30177de ibmcom/calico-kube-controllers "/usr/bin/kube-contr…" 5 minutes ago Up 5 minutes k8s_calico-kube-controllers_calico-kube-controllers-759f7fc556-mm5tg_kube-system_1010712e-82e6-11e8-83bb-0242a9e42805_0
12e9262299af ibmcom/pause:3.0 "/pause" 6 minutes ago Up 5 minutes k8s_POD_calico-kube-controllers-759f7fc556-mm5tg_kube-system_1010712e-82e6-11e8-83bb-0242a9e42805_0
8dcb2b2b3eb5 ibmcom/pause:3.0 "/pause" 6 minutes ago Up 5 minutes k8s_POD_calico-node-wpln7_kube-system_10107b3e-82e6-11e8-83bb-0242a9e42805_0
9486ff78df31 ibmcom/tiller "/tiller" 6 minutes ago Up 6 minutes k8s_tiller_tiller-deploy-c59888d97-7nwph_kube-system_016019ab-82e6-11e8-83bb-0242a9e42805_0
e5588f68af1b ibmcom/pause:3.0 "/pause" 6 minutes ago Up 6 minutes k8s_POD_tiller-deploy-c59888d97-7nwph_kube-system_016019ab-82e6-11e8-83bb-0242a9e42805_0
e80460d857ff ibmcom/icp-image-manager "/icp-image-manager …" 10 minutes ago Up 10 minutes k8s_image-manager_image-manager-0_kube-system_7b7554ce-82e5-11e8-83bb-0242a9e42805_0
e207175f19b7 ibmcom/registry "/entrypoint.sh /etc…" 10 minutes ago Up 10 minutes k8s_icp-registry_image-manager-0_kube-system_7b7554ce-82e5-11e8-83bb-0242a9e42805_0
477faf0668f3 ibmcom/pause:3.0 "/pause" 10 minutes ago Up 10 minutes k8s_POD_image-manager-0_kube-system_7b7554ce-82e5-11e8-83bb-0242a9e42805_0
8996bb8c37b7 d4b6454d4873 "/hyperkube schedule…" 10 minutes ago Up 10 minutes k8s_scheduler_k8s-master-10.50.50.201_kube-system_9e5bce1f08c050be21fa6380e4e363cc_0
835ee941432c d4b6454d4873 "/hyperkube apiserve…" 10 minutes ago Up 10 minutes k8s_apiserver_k8s-master-10.50.50.201_kube-system_9e5bce1f08c050be21fa6380e4e363cc_0
de409ff63cb2 d4b6454d4873 "/hyperkube controll…" 10 minutes ago Up 10 minutes k8s_controller-manager_k8s-master-10.50.50.201_kube-system_9e5bce1f08c050be21fa6380e4e363cc_0
716032a308ea ibmcom/pause:3.0 "/pause" 10 minutes ago Up 10 minutes k8s_POD_k8s-master-10.50.50.201_kube-system_9e5bce1f08c050be21fa6380e4e363cc_0
bd9e64e3d6a2 d4b6454d4873 "/hyperkube proxy --…" 12 minutes ago Up 12 minutes k8s_proxy_k8s-proxy-10.50.50.201_kube-system_3e068267cfe8f990cd2c9a4635be044d_0
bab3c9ef7e40 ibmcom/pause:3.0 "/pause" 12 minutes ago Up 12 minutes k8s_POD_k8s-proxy-10.50.50.201_kube-system_3e068267cfe8f990cd2c9a4635be044d_0
kubectl (master node): I believe Kubernetes should have been initialized and running by this point, but it seems it is not.
kubectl get pods -s 127.0.0.1:8888 --all-namespaces
The connection to the server 127.0.0.1:8888 was refused - did you specify the right host or port?
Following are the options I tried:
Creating the cluster with IP-in-IP (IPIP) both enabled and disabled; as all nodes are on the same subnet, the IPIP setting should not have an impact.
Running etcd on a separate node, and as part of the master node.
ifconfig tunl0 returns the following (i.e., without an IP assignment) in all of the above scenarios:
tunl0 Link encap:IPIP Tunnel HWaddr
NOARP MTU:1480 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
'calicoctl get profile' returns empty, and so does 'calicoctl get nodes', which I believe is because Calico is not configured yet.
Any other checks, thoughts, or options?
Calico Kube Controller Logs (repeated):
2018-07-09 05:46:08.440 [WARNING][1] cache.go 278: Value for key has changed, queueing update to reprogram key="kns.default" type=v3.Profile
2018-07-09 05:46:08.440 [WARNING][1] cache.go 278: Value for key has changed, queueing update to reprogram key="kns.kube-public" type=v3.Profile
2018-07-09 05:46:08.440 [WARNING][1] cache.go 278: Value for key has changed, queueing update to reprogram key="kns.kube-system" type=v3.Profile
2018-07-09 05:46:08.440 [INFO][1] namespace_controller.go 223: Create/Update Profile in Calico datastore key="kns.default"
2018-07-09 05:46:08.441 [INFO][1] namespace_controller.go 246: Update Profile in Calico datastore with resource version key="kns.default"
2018-07-09 05:46:08.442 [INFO][1] namespace_controller.go 252: Successfully updated profile key="kns.default"
2018-07-09 05:46:08.442 [INFO][1] namespace_controller.go 223: Create/Update Profile in Calico datastore key="kns.kube-public"
2018-07-09 05:46:08.446 [INFO][1] namespace_controller.go 246: Update Profile in Calico datastore with resource version key="kns.kube-public"
2018-07-09 05:46:08.447 [INFO][1] namespace_controller.go 252: Successfully updated profile key="kns.kube-public"
2018-07-09 05:46:08.447 [INFO][1] namespace_controller.go 223: Create/Update Profile in Calico datastore key="kns.kube-system"
2018-07-09 05:46:08.465 [INFO][1] namespace_controller.go 246: Update Profile in Calico datastore with resource version key="kns.kube-system"
2018-07-09 05:46:08.476 [INFO][1] namespace_controller.go 252: Successfully updated profile key="kns.kube-system"
Firstly, as of ICP 2.1.0.3, the insecure port 8888 for the K8S apiserver is disabled, so you cannot use this insecure port to talk to Kubernetes.
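Instead, talk to the apiserver over the secure port with a kubeconfig; a minimal sketch (the admin.conf path is the kubeadm default and may differ in an ICP installation):
kubectl --kubeconfig /etc/kubernetes/admin.conf get pods --all-namespaces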
For this issue, could you share the following information and outputs:
The network configurations in your environment.
-> ifconfig -a
The route table:
-> route
The contents of your /etc/hosts file:
-> cat /etc/hosts
The ICP installation configurations files.
-> config.yaml & hosts
The issue seemed to be with the Docker storage driver (btrfs) that I was using. Once I switched to 'overlay', I was able to move forward.
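For reference, a minimal sketch of that switch (assumes a systemd-managed Docker; changing the storage driver hides existing images and containers, so back up anything you need first). Contents of /etc/docker/daemon.json:
{
  "storage-driver": "overlay2"
}
Then restart Docker and verify:
sudo systemctl restart docker
docker info | grep "Storage Driver"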
I had the same experience, at least the same high level error message:
"FAILED - RETRYING: Waiting for kube-dns to start".
I had to do two things to get past this installation step:
change the hostname (and the entry in my /etc/hosts) to lowercase, since Calico doesn't like uppercase (see the sketch below)
comment out the localhost entry in /etc/hosts (#127.0.0.1 localhost.localdomain localhost)
After doing this, the installation completed fine.
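A sketch of the hostname change mentioned above (hostnamectl assumed available; update the matching /etc/hosts entry by hand):
sudo hostnamectl set-hostname "$(hostname | tr '[:upper:]' '[:lower:]')"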

Kubelet Not Starting: Crashing with Exit Status: 255/n/a

Ubuntu 16.04 LTS, Docker 17.12.1, Kubernetes 1.10.0
Kubelet not starting:
Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Failed with result 'exit-code'.
Note: No issue with v1.9.1
LOGS:
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.518085 20051 docker_service.go:249] Docker Info: &{ID:WDJK:3BCI:BGCM:VNF3:SXGW:XO5G:KJ3Z:EKIH:XGP7:XJGG:LFBL:YWAJ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:btrfs DriverStatus:[[Build Version Btrfs v4.15.1] [Library Vers
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.521232 20051 docker_service.go:262] Setting cgroupDriver to cgroupfs
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.532834 20051 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.533812 20051 kuberuntime_manager.go:186] Container runtime docker initialized, version: 18.05.0-ce, apiVersion: 1.37.0
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.534071 20051 csi_plugin.go:61] kubernetes.io/csi: plugin initializing...
Jun 22 06:45:55 dev-master hyperkube[20051]: W0622 06:45:55.534846 20051 kubelet.go:903] Accelerators feature is deprecated and will be removed in v1.11. Please use device plugins instead. They can be enabled using the DevicePlugins feature gate.
Jun 22 06:45:55 dev-master hyperkube[20051]: W0622 06:45:55.535035 20051 kubelet.go:909] GPU manager init error: couldn't get a handle to the library: unable to open a handle to the library, GPU feature is disabled.
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.535082 20051 server.go:129] Starting to listen on 0.0.0.0:10250
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.535164 20051 kubelet.go:1282] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.535189 20051 server.go:944] Started kubelet
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.535555 20051 event.go:209] Unable to write event: 'Post https://10.50.50.201:8001/api/v1/namespaces/default/events: dial tcp 10.50.50.201:8001: getsockopt: connection refused' (may retry after sleeping)
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.535825 20051 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536202 20051 status_manager.go:140] Starting to sync pod status with apiserver
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536253 20051 kubelet.go:1782] Starting kubelet main sync loop.
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536285 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536464 20051 volume_manager.go:247] Starting Kubelet Volume Manager
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.536613 20051 desired_state_of_world_populator.go:129] Desired state populator starts to run
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.538574 20051 server.go:299] Adding debug handlers to kubelet server.
Jun 22 06:45:55 dev-master hyperkube[20051]: W0622 06:45:55.538664 20051 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.539199 20051 kubelet.go:2130] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.636465 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.636795 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.638630 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.638954 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.836686 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.839219 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:55 dev-master hyperkube[20051]: I0622 06:45:55.841028 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jun 22 06:45:55 dev-master hyperkube[20051]: E0622 06:45:55.841357 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:56 dev-master hyperkube[20051]: I0622 06:45:56.236826 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jun 22 06:45:56 dev-master hyperkube[20051]: I0622 06:45:56.241590 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:56 dev-master hyperkube[20051]: I0622 06:45:56.245081 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.245475 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.492206 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://10.50.50.201:8001/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.493216 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.50.50.201:8001/api/v1/pods?fieldSelector=spec.nodeName%3D10.50.50.201&limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: co
Jun 22 06:45:56 dev-master hyperkube[20051]: E0622 06:45:56.494240 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://10.50.50.201:8001/api/v1/nodes?fieldSelector=metadata.name%3D10.50.50.201&limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connecti
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.036893 20051 kubelet.go:1799] skipping pod synchronization - [container runtime is down]
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.045705 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.047489 20051 kubelet_node_status.go:83] Attempting to register node 10.50.50.201
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.047787 20051 kubelet_node_status.go:107] Unable to register node "10.50.50.201" with API server: Post https://10.50.50.201:8001/api/v1/nodes: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.413319 20051 event.go:209] Unable to write event: 'Post https://10.50.50.201:8001/api/v1/namespaces/default/events: dial tcp 10.50.50.201:8001: getsockopt: connection refused' (may retry after sleeping)
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.492781 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://10.50.50.201:8001/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connection refused
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.493560 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.50.50.201:8001/api/v1/pods?fieldSelector=spec.nodeName%3D10.50.50.201&limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: co
Jun 22 06:45:57 dev-master hyperkube[20051]: E0622 06:45:57.494574 20051 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://10.50.50.201:8001/api/v1/nodes?fieldSelector=metadata.name%3D10.50.50.201&limit=500&resourceVersion=0: dial tcp 10.50.50.201:8001: getsockopt: connecti
Jun 22 06:45:57 dev-master hyperkube[20051]: W0622 06:45:57.549477 20051 manager.go:340] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.659932 20051 kubelet_node_status.go:289] Setting node annotation to enable volume controller attach/detach
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.661447 20051 cpu_manager.go:155] [cpumanager] starting with none policy
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.661459 20051 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jun 22 06:45:57 dev-master hyperkube[20051]: I0622 06:45:57.661468 20051 policy_none.go:42] [cpumanager] none policy: Start
Jun 22 06:45:57 dev-master hyperkube[20051]: W0622 06:45:57.661523 20051 fs.go:539] stat failed on /dev/loop10 with error: no such file or directory
Jun 22 06:45:57 dev-master hyperkube[20051]: F0622 06:45:57.661535 20051 kubelet.go:1359] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 126 in cached partitions map
Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jun 22 06:45:57 dev-master systemd[1]: kubelet.service: Failed with result 'exit-code'.
Run the following command on all of your nodes; it worked for me:
swapoff -a
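To keep swap disabled across reboots (a common companion step, not part of the original answer), also comment out the swap entry in /etc/fstab:
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab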
I have found a lot of the same error message in your logs:
dial tcp 10.50.50.201:8001: getsockopt: connection refused
There may be several problems:
the IP address and/or port are incorrect
there is no access from the worker to the master
something is wrong with your master, for example kube-apiserver is down
You should look in those directions; the quick checks below may help narrow it down.
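A few quick checks, as a sketch (the IP and port come from the logs above; adjust them to your environment):
# From the worker: is the apiserver port reachable at all?
nc -zv 10.50.50.201 8001
# On the master: is anything listening on that port?
sudo ss -tlnp | grep 8001
# On the master: is the apiserver container actually running?
docker ps | grep apiserver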
user1188867's answer is definitely correct.
I want to add a piece of information for further reference for those not using Ubuntu.
I hit this on a cluster with Clear Linux on bare metal. Here are instructions on how to detect the issue in such an environment and disable swap to solve it.
First, launching sudo systemctl status kubelet after reboot produces the following:
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─0-cni.conf, 0-containerd.conf, 10-cgroup-driver.conf
Active: activating (auto-restart) (Result: exit-code) since Thu 2020-12-17 11:04:37 CET; 2s ago
Docs: http://kubernetes.io/docs/
Process: 2404 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, st>
Process: 2405 ExecStartPost=/usr/bin/kubelet-version-check.sh store (code=exited, status=0/SUCCESS)
Main PID: 2404 (code=exited, status=255/EXCEPTION)
CPU: 683ms
The issue was actually the existence of the swap file.
To disable it:
add the nofail option to /usr/lib/systemd/system/var-swapfile.swap. I personally used the following one-liner: sudo sed -i s/Options=/Options=nofail,/ /usr/lib/systemd/system/var-swapfile.swap
Disable swapping: sudo swapoff -a
Delete the swap file: sudo rm /var/swapfile
This procedure on Clear Linux persists the deactivation of the swap on reboots.
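To verify after a reboot (generic Linux commands, not specific to Clear Linux):
swapon --show   # no output means no active swap
free -h         # the Swap line should read 0B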
I had the same exit status, but my kubelet failed to start due to a limit on the number of max_user_watches. The following got the kubelet working again:
https://github.com/google/cadvisor/issues/1581#issuecomment-367616070
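The fix discussed there amounts to raising the inotify watch limit; a sketch (524288 is a commonly used value, not a prescribed one):
sudo sysctl fs.inotify.max_user_watches=524288
echo 'fs.inotify.max_user_watches=524288' | sudo tee -a /etc/sysctl.conf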
This problem can also arise if the Docker service is not enabled after installation: after the next reboot, Docker does not start, so the kubelet cannot start either. Don't forget, after installing Docker:
sudo systemctl enable docker
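A quick check that both services will start at boot (assuming systemd):
systemctl is-enabled docker kubelet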

kubeadm init 1.9 hangs with vsphere cloud provider

kubeadm init seems to hang since I started using the vSphere cloud provider. I followed the instructions from here. Has anybody got it working with 1.9?
root@master-0:~# kubeadm init --config /tmp/kube.yaml
[init] Using Kubernetes version: v1.9.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "master-0" could not be reached
[WARNING Hostname]: hostname "master-0" lookup master-0 on 8.8.8.8:53: no such host
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master-0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.11.0.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
Master OS details
root@master-0:~# uname -r
4.4.0-21-generic
root@master-0:~# docker version
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Experimental: false
root@master-0:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
ID=ubuntu
kubelet service seems to be running fine
root@master-0:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 90-local-extras.conf
Active: active (running) since Mon 2018-01-22 11:25:00 UTC; 13min ago
Docs: http://kubernetes.io/docs/
Main PID: 4270 (kubelet)
Tasks: 13 (limit: 512)
Memory: 37.6M
CPU: 11.626s
CGroup: /system.slice/kubelet.service
└─4270 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
journalctl -f -u kubelet shows some connection-refused errors, probably because the network add-on is not installed yet. Those errors should go away once networking is installed after kubeadm init:
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.759764 1184 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.761350 1184 controller.go:114] kubelet config controller: starting controller
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.762632 1184 controller.go:118] kubelet config controller: validating combination of defaults and flags
Jan 22 11:17:46 localhost systemd[1]: Started Kubernetes systemd probe.
Jan 22 11:17:46 localhost kubelet[1184]: W0122 11:17:46.070619 1184 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.081384 1184 server.go:182] Version: v1.9.1
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.081417 1184 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.082271 1184 plugins.go:101] No cloud provider specified.
Jan 22 11:17:46 localhost kubelet[1184]: error: failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Unit entered failed state.
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 11:17:56 localhost systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jan 22 11:17:56 localhost systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 22 11:17:56 localhost systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.410883 1229 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.411198 1229 controller.go:114] kubelet config controller: starting controller
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.411353 1229 controller.go:118] kubelet config controller: validating combination of defaults and flags
Jan 22 11:17:56 localhost systemd[1]: Started Kubernetes systemd probe.
Jan 22 11:17:56 localhost kubelet[1229]: W0122 11:17:56.424264 1229 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429102 1229 server.go:182] Version: v1.9.1
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429156 1229 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429247 1229 plugins.go:101] No cloud provider specified.
Jan 22 11:17:56 localhost kubelet[1229]: E0122 11:17:56.461608 1229 certificate_manager.go:314] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://10.11.0.101:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 10.11.0.101:6443: getsockopt: connection refused
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.491374 1229 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.492069 1229 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.492102 1229 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
docker ps, controller & scheduler logs
root@master-0:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6db549891439 677911f7ae8f "kube-scheduler --..." About an hour ago Up About an hour k8s_kube-scheduler_kube-scheduler-master-0_kube-system_df32e281019039e73be77e3f53d09596_0
4f7ddefbd86e 4978f9a64966 "kube-controller-m..." About an hour ago Up About an hour k8s_kube-controller-manager_kube-controller-manager-master-0_kube-system_34bad395be69e74db6304d6c4218c536_0
18604db89db6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-scheduler-master-0_kube-system_df32e281019039e73be77e3f53d09596_0
252b86ea4b5e gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-controller-manager-master-0_kube-system_34bad395be69e74db6304d6c4218c536_0
4021061bf8a6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-apiserver-master-0_kube-system_7a3ae9279d0ca7b4ada8333fbe7442ed_0
4f94163d313b gcr.io/google_containers/etcd-amd64:3.1.10 "etcd --name=etcd0..." About an hour ago Up About an hour 0.0.0.0:2379-2380->2379-2380/tcp, 0.0.0.0:4001->4001/tcp, 7001/tcp etcd
root@master-0:~# docker logs -f 4f7ddefbd86e
I0122 11:25:06.253706 1 controllermanager.go:108] Version: v1.9.1
I0122 11:25:06.258712 1 leaderelection.go:174] attempting to acquire leader lease...
E0122 11:25:06.259448 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:09.711377 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:13.969270 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:17.564964 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:20.616174 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
root@master-0:~# docker logs -f 6db549891439
W0122 11:25:06.285765 1 server.go:159] WARNING: all flags than --config are deprecated. Please begin using a config file ASAP.
I0122 11:25:06.292865 1 server.go:551] Version: v1.9.1
I0122 11:25:06.295776 1 server.go:570] starting healthz server on 127.0.0.1:10251
E0122 11:25:06.295947 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: Get https://10.11.0.101:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296027 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: Get https://10.11.0.101:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296092 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: Get https://10.11.0.101:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296160 1 reflector.go:205] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:590: Failed to list *v1.Pod: Get https://10.11.0.101:6443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296218 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.StatefulSet: Get https://10.11.0.101:6443/apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.297374 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolume: Get https://10.11.0.101:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
There was a bug in the controller manager when starting with the vSphere cloud provider; see https://github.com/kubernetes/kubernetes/issues/57279. It was fixed in 1.9.2.
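If you are on Ubuntu with the Kubernetes apt repository configured, moving to the fixed release looks roughly like this (the version pins are illustrative; the -00 suffix matches the old apt.kubernetes.io packaging):
sudo apt-get update
sudo apt-get install -y kubeadm=1.9.2-00 kubelet=1.9.2-00 kubectl=1.9.2-00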