Kubectl is not able to get cluster info from minikube

I am new to Kubernetes and trying out the minikube tutorial. I am on a Mac and have installed minikube, the kubectl CLI, and the hyperkit driver. The Docker daemon is running.
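Roughly how I installed them (a sketch from memory; the cask/formula names and the hyperkit driver steps follow the minikube docs of the time, so they may differ for other versions):
brew install kubectl                 # kubectl CLI
brew cask install minikube           # minikube itself
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-hyperkit
sudo install -o root -g wheel -m 4755 docker-machine-driver-hyperkit /usr/local/bin/   # driver must be root-owned and setuid
I then started minikube, passing the proxy variables: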
minikube start --vm-driver=hyperkit \
--docker-env HTTP_PROXY=http://my-http-proxy-host:my-http-proxy-port \
--docker-env HTTPS_PROXY=https://my-https-proxy-host:my-https-proxy-port
I set kubectl to use the minikube context.
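For reference, this is roughly how I switched (a sketch; minikube creates a context named minikube by default):
kubectl config use-context minikube   # point kubectl at the minikube cluster
kubectl config current-context        # should print "minikube" if the switch worked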
However, when I run:
kubectl cluster-info
I get the error below:
Unable to connect to the server: net/http: TLS handshake timeout
Below is the output of minikube logs | grep error:
Oct 21 16:53:22 minikube localkube[3010]: E1021 16:53:22.531735 3010 proxier.go:1701] Failed to delete stale service IP 10.96.0.10 connections, error: error deleting connection tracking state for UDP service IP: 10.96.0.10, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH
Oct 21 16:53:26 minikube localkube[3010]: E1021 16:53:26.781082 3010 proxier.go:964] Failed to delete kube-system/kube-dns:dns endpoint connections, error: error deleting conntrack entries for UDP peer {10.96.0.10, 172.17.0.3}, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH
Oct 21 16:53:37 minikube localkube[3010]: E1021 16:53:37.528164 3010 proxier.go:1701] Failed to delete stale service IP 10.96.0.10 connections, error: error deleting connection tracking state for UDP service IP: 10.96.0.10, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH
Oct 23 01:45:00 minikube localkube[3010]: E1023 01:45:00.513057 3010 remote_runtime.go:115] StopPodSandbox "9aad7169cf8c357f512575118efdcb88fb796bcc7642c44b5f3b79f2310720ff" from runtime service failed: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Oct 23 01:45:00 minikube localkube[3010]: E1023 01:45:00.513654 3010 remote_runtime.go:115] StopPodSandbox "172f08e151ec2a7b12132bb65917b7d993cbba9978f4ff5d6f272fba132317d0" from runtime service failed: rpc error: code = Unknown desc = [failed to get checkpoint for sandbox "172f08e151ec2a7b12132bb65917b7d993cbba9978f4ff5d6f272fba132317d0": key is not found, failed to get sandbox status: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]
Oct 23 01:45:00 minikube localkube[3010]: E1023 01:45:00.513758 3010 kuberuntime_manager.go:595] killPodWithSyncResult failed: failed to "KillPodSandbox" for "22d0437c-d333-11e8-b4fc-0800277656c6" with KillPodSandboxError: "rpc error: code = Unknown desc = [failed to get checkpoint for sandbox \"172f08e151ec2a7b12132bb65917b7d993cbba9978f4ff5d6f272fba132317d0\": key is not found, failed to get sandbox status: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
Oct 23 01:45:00 minikube localkube[3010]: E1023 01:45:00.513805 3010 pod_workers.go:186] Error syncing pod 22d0437c-d333-11e8-b4fc-0800277656c6 ("storage-provisioner_kube-system(22d0437c-d333-11e8-b4fc-0800277656c6)"), skipping: failed to "KillPodSandbox" for "22d0437c-d333-11e8-b4fc-0800277656c6" with KillPodSandboxError: "rpc error: code = Unknown desc = [failed to get checkpoint for sandbox \"172f08e151ec2a7b12132bb65917b7d993cbba9978f4ff5d6f272fba132317d0\": key is not found, failed to get sandbox status: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
Oct 23 01:45:00 minikube localkube[3010]: E1023 01:45:00.549309 3010 remote_runtime.go:229] StopContainer "7058c1679128f30650a384e0ce3cfe31eff7fb2fbcd144b20fd26e1c94a1d61b" from runtime service failed: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Oct 23 01:45:00 minikube localkube[3010]: E1023 01:45:00.550191 3010 kuberuntime_container.go:604] Container "docker://7058c1679128f30650a384e0ce3cfe31eff7fb2fbcd144b20fd26e1c94a1d61b" termination failed with gracePeriod 30: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Oct 23 01:45:00 minikube localkube[3010]: E1023 01:45:00.549603 3010 remote_runtime.go:229] StopContainer "0837dc4d55ea00fe3367fce3b94e5a1baf5a2ed7f1affe4f315312bf94c68a21" from runtime service failed: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Oct 23 01:45:00 minikube localkube[3010]: E1023 01:45:00.554796 3010 kuberuntime_container.go:604] Container "docker://0837dc4d55ea00fe3367fce3b94e5a1baf5a2ed7f1affe4f315312bf94c68a21" termination failed with gracePeriod 30: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Oct 23 01:45:00 minikube localkube[3010]: E1023 01:45:00.556955 3010 remote_runtime.go:115] StopPodSandbox "69b05b274f31ee80f9e1e732e51d2a1859c81a9deaa913e54dcfafe891340acc" from runtime service failed: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Oct 23 01:45:00 minikube localkube[3010]: E1023 01:45:00.561291 3010 remote_runtime.go:115] StopPodSandbox "8536992fb81fc810a29b0fa41733a960b5ad422e0b5c8a2a206c1f8d165bc6ec" from runtime service failed: rpc error: code = Unknown desc = [failed to get checkpoint for sandbox "8536992fb81fc810a29b0fa41733a960b5ad422e0b5c8a2a206c1f8d165bc6ec": key is not found, failed to get sandbox status: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]
Oct 23 01:45:00 minikube localkube[3010]: E1023 01:45:00.561411 3010 kuberuntime_manager.go:595] killPodWithSyncResult failed: [failed to "KillContainer" for "kubedns" with KillContainerError: "rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
Oct 23 01:45:00 minikube localkube[3010]: , failed to "KillContainer" for "dnsmasq" with KillContainerError: "rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
Oct 23 01:45:00 minikube localkube[3010]: , failed to "KillPodSandbox" for "23825d5c-d333-11e8-b4fc-0800277656c6" with KillPodSandboxError: "rpc error: code = Unknown desc = [failed to get checkpoint for sandbox \"8536992fb81fc810a29b0fa41733a960b5ad422e0b5c8a2a206c1f8d165bc6ec\": key is not found, failed to get sandbox status: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
Oct 23 01:45:00 minikube localkube[3010]: E1023 01:45:00.561442 3010 pod_workers.go:186] Error syncing pod 23825d5c-d333-11e8-b4fc-0800277656c6 ("kube-dns-54cccfbdf8-wk7v5_kube-system(23825d5c-d333-11e8-b4fc-0800277656c6)"), skipping: [failed to "KillContainer" for "kubedns" with KillContainerError: "rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
Oct 23 01:45:00 minikube localkube[3010]: , failed to "KillContainer" for "dnsmasq" with KillContainerError: "rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
Oct 23 01:45:00 minikube localkube[3010]: , failed to "KillPodSandbox" for "23825d5c-d333-11e8-b4fc-0800277656c6" with KillPodSandboxError: "rpc error: code = Unknown desc = [failed to get checkpoint for sandbox \"8536992fb81fc810a29b0fa41733a960b5ad422e0b5c8a2a206c1f8d165bc6ec\": key is not found, failed to get sandbox status: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
Oct 23 01:45:01 minikube localkube[3010]: E1023 01:45:01.607038 3010 remote_runtime.go:229] StopContainer "7058c1679128f30650a384e0ce3cfe31eff7fb2fbcd144b20fd26e1c94a1d61b" from runtime service failed: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Oct 23 01:45:01 minikube localkube[3010]: E1023 01:45:01.607075 3010 kuberuntime_container.go:604] Container "docker://7058c1679128f30650a384e0ce3cfe31eff7fb2fbcd144b20fd26e1c94a1d61b" termination failed with gracePeriod 30: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Oct 23 01:45:01 minikube localkube[3010]: E1023 01:45:01.607162 3010 remote_runtime.go:229] StopContainer "0837dc4d55ea00fe3367fce3b94e5a1baf5a2ed7f1affe4f315312bf94c68a21" from runtime service failed: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Oct 23 01:45:01 minikube localkube[3010]: E1023 01:45:01.607177 3010 kuberuntime_container.go:604] Container "docker://0837dc4d55ea00fe3367fce3b94e5a1baf5a2ed7f1affe4f315312bf94c68a21" termination failed with gracePeriod 30: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Oct 23 01:45:01 minikube localkube[3010]: E1023 01:45:01.607527 3010 remote_runtime.go:115] StopPodSandbox "69b05b274f31ee80f9e1e732e51d2a1859c81a9deaa913e54dcfafe891340acc" from runtime service failed: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Oct 23 01:45:01 minikube localkube[3010]: E1023 01:45:01.607907 3010 remote_runtime.go:115] StopPodSandbox "8536992fb81fc810a29b0fa41733a960b5ad422e0b5c8a2a206c1f8d165bc6ec" from runtime service failed: rpc error: code = Unknown desc = [failed to get checkpoint for sandbox "8536992fb81fc810a29b0fa41733a960b5ad422e0b5c8a2a206c1f8d165bc6ec": key is not found, failed to get sandbox status: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]
Oct 23 01:45:01 minikube localkube[3010]: E1023 01:45:01.608521 3010 kuberuntime_manager.go:595] killPodWithSyncResult failed: [failed to "KillContainer" for "kubedns" with KillContainerError: "rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
Oct 23 01:45:01 minikube localkube[3010]: , failed to "KillContainer" for "dnsmasq" with KillContainerError: "rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
Oct 23 01:45:01 minikube localkube[3010]: , failed to "KillPodSandbox" for "23825d5c-d333-11e8-b4fc-0800277656c6" with KillPodSandboxError: "rpc error: code = Unknown desc = [failed to get checkpoint for sandbox \"8536992fb81fc810a29b0fa41733a960b5ad422e0b5c8a2a206c1f8d165bc6ec\": key is not found, failed to get sandbox status: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
Oct 23 01:45:01 minikube localkube[3010]: E1023 01:45:01.608574 3010 pod_workers.go:186] Error syncing pod 23825d5c-d333-11e8-b4fc-0800277656c6 ("kube-dns-54cccfbdf8-wk7v5_kube-system(23825d5c-d333-11e8-b4fc-0800277656c6)"), skipping: [failed to "KillContainer" for "kubedns" with KillContainerError: "rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
Oct 23 01:45:01 minikube localkube[3010]: , failed to "KillContainer" for "dnsmasq" with KillContainerError: "rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
Oct 23 01:45:01 minikube localkube[3010]: , failed to "KillPodSandbox" for "23825d5c-d333-11e8-b4fc-0800277656c6" with KillPodSandboxError: "rpc error: code = Unknown desc = [failed to get checkpoint for sandbox \"8536992fb81fc810a29b0fa41733a960b5ad422e0b5c8a2a206c1f8d165bc6ec\": key is not found, failed to get sandbox status: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
Oct 23 01:45:01 minikube localkube[3010]: E1023 01:45:01.683566 3010 remote_runtime.go:169] ListPodSandbox with filter &PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},} from runtime service failed: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Oct 23 01:45:01 minikube localkube[3010]: E1023 01:45:01.683676 3010 kuberuntime_sandbox.go:192] ListPodSandbox failed: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Oct 23 01:45:01 minikube localkube[3010]: E1023 01:45:01.683710 3010 kubelet.go:1929] Failed cleaning pods: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Oct 23 01:45:05 minikube localkube[3010]: E1023 01:45:05.299705 3010 proxier.go:964] Failed to delete kube-system/kube-dns:dns endpoint connections, error: error deleting conntrack entries for UDP peer {10.96.0.10, 172.17.0.3}, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH
Oct 23 01:45:09 minikube localkube[3010]: E1023 01:45:09.791294 3010 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-addon-manager-minikube": Error response from daemon: transport is closing
Oct 23 01:45:09 minikube localkube[3010]: E1023 01:45:09.791344 3010 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-addon-manager-minikube_kube-system(c4c3188325a93a2d7fb1714e1abf1259)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-addon-manager-minikube": Error response from daemon: transport is closing
Oct 23 01:45:09 minikube localkube[3010]: E1023 01:45:09.791356 3010 kuberuntime_manager.go:647] createPodSandbox for pod "kube-addon-manager-minikube_kube-system(c4c3188325a93a2d7fb1714e1abf1259)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-addon-manager-minikube": Error response from daemon: transport is closing
Oct 23 01:45:09 minikube localkube[3010]: E1023 01:45:09.791399 3010 pod_workers.go:186] Error syncing pod c4c3188325a93a2d7fb1714e1abf1259 ("kube-addon-manager-minikube_kube-system(c4c3188325a93a2d7fb1714e1abf1259)"), skipping: failed to "CreatePodSandbox" for "kube-addon-manager-minikube_kube-system(c4c3188325a93a2d7fb1714e1abf1259)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-addon-manager-minikube_kube-system(c4c3188325a93a2d7fb1714e1abf1259)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"kube-addon-manager-minikube\": Error response from daemon: transport is closing"
Oct 23 01:45:10 minikube localkube[3010]: E1023 01:45:10.055640 3010 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-dns-54cccfbdf8-wk7v5": Error response from daemon: failed to update store for object type *libnetwork.endpoint: open : no such file or directory
Oct 23 01:45:10 minikube localkube[3010]: E1023 01:45:10.055893 3010 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-dns-54cccfbdf8-wk7v5_kube-system(23825d5c-d333-11e8-b4fc-0800277656c6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-dns-54cccfbdf8-wk7v5": Error response from daemon: failed to update store for object type *libnetwork.endpoint: open : no such file or directory
Oct 23 01:45:10 minikube localkube[3010]: E1023 01:45:10.056004 3010 kuberuntime_manager.go:647] createPodSandbox for pod "kube-dns-54cccfbdf8-wk7v5_kube-system(23825d5c-d333-11e8-b4fc-0800277656c6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-dns-54cccfbdf8-wk7v5": Error response from daemon: failed to update store for object type *libnetwork.endpoint: open : no such file or directory
Oct 23 01:45:10 minikube localkube[3010]: E1023 01:45:10.056194 3010 pod_workers.go:186] Error syncing pod 23825d5c-d333-11e8-b4fc-0800277656c6 ("kube-dns-54cccfbdf8-wk7v5_kube-system(23825d5c-d333-11e8-b4fc-0800277656c6)"), skipping: failed to "CreatePodSandbox" for "kube-dns-54cccfbdf8-wk7v5_kube-system(23825d5c-d333-11e8-b4fc-0800277656c6)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-54cccfbdf8-wk7v5_kube-system(23825d5c-d333-11e8-b4fc-0800277656c6)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"kube-dns-54cccfbdf8-wk7v5\": Error response from daemon: failed to update store for object type *libnetwork.endpoint: open : no such file or directory"
Oct 23 01:45:10 minikube localkube[3010]: E1023 01:45:10.770086 3010 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-dns-54cccfbdf8-wk7v5": error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.31/containers/d3249fd7ea20fd2f6331f1cc78f3685b15fc63359992de039188960eafc893cd/start: EOF
Oct 23 01:45:10 minikube localkube[3010]: E1023 01:45:10.770123 3010 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-dns-54cccfbdf8-wk7v5_kube-system(23825d5c-d333-11e8-b4fc-0800277656c6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-dns-54cccfbdf8-wk7v5": error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.31/containers/d3249fd7ea20fd2f6331f1cc78f3685b15fc63359992de039188960eafc893cd/start: EOF
Oct 23 01:45:10 minikube localkube[3010]: E1023 01:45:10.770133 3010 kuberuntime_manager.go:647] createPodSandbox for pod "kube-dns-54cccfbdf8-wk7v5_kube-system(23825d5c-d333-11e8-b4fc-0800277656c6)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-dns-54cccfbdf8-wk7v5": error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.31/containers/d3249fd7ea20fd2f6331f1cc78f3685b15fc63359992de039188960eafc893cd/start: EOF
Oct 23 01:45:10 minikube localkube[3010]: E1023 01:45:10.770174 3010 pod_workers.go:186] Error syncing pod 23825d5c-d333-11e8-b4fc-0800277656c6 ("kube-dns-54cccfbdf8-wk7v5_kube-system(23825d5c-d333-11e8-b4fc-0800277656c6)"), skipping: failed to "CreatePodSandbox" for "kube-dns-54cccfbdf8-wk7v5_kube-system(23825d5c-d333-11e8-b4fc-0800277656c6)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-54cccfbdf8-wk7v5_kube-system(23825d5c-d333-11e8-b4fc-0800277656c6)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"kube-dns-54cccfbdf8-wk7v5\": error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.31/containers/d3249fd7ea20fd2f6331f1cc78f3685b15fc63359992de039188960eafc893cd/start: EOF"
Oct 23 01:45:10 minikube localkube[3010]: E1023 01:45:10.770502 3010 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-addon-manager-minikube": error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.31/containers/create?name=k8s_POD_kube-addon-manager-minikube_kube-system_c4c3188325a93a2d7fb1714e1abf1259_9: EOF
Oct 23 01:45:10 minikube localkube[3010]: E1023 01:45:10.770566 3010 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-addon-manager-minikube_kube-system(c4c3188325a93a2d7fb1714e1abf1259)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-addon-manager-minikube": error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.31/containers/create?name=k8s_POD_kube-addon-manager-minikube_kube-system_c4c3188325a93a2d7fb1714e1abf1259_9: EOF
Oct 23 01:45:10 minikube localkube[3010]: E1023 01:45:10.770575 3010 kuberuntime_manager.go:647] createPodSandbox for pod "kube-addon-manager-minikube_kube-system(c4c3188325a93a2d7fb1714e1abf1259)" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod "kube-addon-manager-minikube": error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.31/containers/create?name=k8s_POD_kube-addon-manager-minikube_kube-system_c4c3188325a93a2d7fb1714e1abf1259_9: EOF
Oct 23 01:45:10 minikube localkube[3010]: E1023 01:45:10.770602 3010 pod_workers.go:186] Error syncing pod c4c3188325a93a2d7fb1714e1abf1259 ("kube-addon-manager-minikube_kube-system(c4c3188325a93a2d7fb1714e1abf1259)"), skipping: failed to "CreatePodSandbox" for "kube-addon-manager-minikube_kube-system(c4c3188325a93a2d7fb1714e1abf1259)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-addon-manager-minikube_kube-system(c4c3188325a93a2d7fb1714e1abf1259)\" failed: rpc error: code = Unknown desc = failed to create a sandbox for pod \"kube-addon-manager-minikube\": error during connect: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.31/containers/create?name=k8s_POD_kube-addon-manager-minikube_kube-system_c4c3188325a93a2d7fb1714e1abf1259_9: EOF"
Oct 23 01:45:23 minikube localkube[3010]: E1023 01:45:23.491216 3010 proxier.go:1701] Failed to delete stale service IP 10.96.0.10 connections, error: error deleting connection tracking state for UDP service IP: 10.96.0.10, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH
Oct 23 01:45:35 minikube localkube[21585]: http: TLS handshake error from 172.17.0.2:37382: remote error: tls: bad certificate
Oct 23 01:45:35 minikube localkube[21585]: http: TLS handshake error from 172.17.0.2:37386: remote error: tls: bad certificate
Oct 23 01:45:35 minikube localkube[21585]: http: TLS handshake error from 127.0.0.1:34088: remote error: tls: bad certificate
Oct 23 01:45:35 minikube localkube[21585]: http: TLS handshake error from 172.17.0.2:37384: remote error: tls: bad certificate
Oct 23 01:45:36 minikube localkube[21585]: http: TLS handshake error from 127.0.0.1:34112: remote error: tls: bad certificate
Oct 23 01:45:36 minikube localkube[21585]: http: TLS handshake error from 172.17.0.2:37414: remote error: tls: bad certificate
Oct 23 01:45:36 minikube localkube[21585]: http: TLS handshake error from 172.17.0.2:37416: remote error: tls: bad certificate
Oct 23 01:45:36 minikube localkube[21585]: http: TLS handshake error from 172.17.0.2:37418: remote error: tls: bad certificate
Oct 23 01:45:37 minikube localkube[21585]: http: TLS handshake error from 127.0.0.1:34120: remote error: tls: bad certificate
Oct 23 01:45:37 minikube localkube[21585]: http: TLS handshake error from 172.17.0.2:37422: remote error: tls: bad certificate
It appears that minikube itself has issues. I get the same error when I start minikube without the proxy settings as well.
hyperkit version: hyperkit: v0.20180403-17-g3e954c
minikube version: v0.25.1
kubectl version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-28T15:20:58Z", GoVersion:"go1.11", Compiler:"gc", Platform:"darwin/amd64"}
I ran kubectl -v9 cluster-info; the output is below.
I1023 00:11:12.639212 18611 loader.go:359] Config loaded from file /Users/fna516/.kube/config
I1023 00:11:12.640094 18611 loader.go:359] Config loaded from file /Users/fna516/.kube/config
I1023 00:11:12.641924 18611 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.12.0 (darwin/amd64) kubernetes/0ed3388" 'https://192.168.99.100:8443/api?timeout=32s'
I1023 00:11:22.826784 18611 round_trippers.go:405] GET https://192.168.99.100:8443/api?timeout=32s in 10185 milliseconds
I1023 00:11:22.826808 18611 round_trippers.go:411] Response Headers:
I1023 00:11:22.826900 18611 cached_discovery.go:111] skipped caching discovery info due to Get https://192.168.99.100:8443/api?timeout=32s: net/http: TLS handshake timeout
I1023 00:11:22.827565 18611 loader.go:359] Config loaded from file /Users/fna516/.kube/config
I1023 00:11:22.828233 18611 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.12.0 (darwin/amd64) kubernetes/0ed3388" 'https://192.168.99.100:8443/api?timeout=32s'
I1023 00:11:32.943036 18611 round_trippers.go:405] GET https://192.168.99.100:8443/api?timeout=32s in 10115 milliseconds
I1023 00:11:32.943069 18611 round_trippers.go:411] Response Headers:
I1023 00:11:32.943112 18611 cached_discovery.go:111] skipped caching discovery info due to Get https://192.168.99.100:8443/api?timeout=32s: net/http: TLS handshake timeout
I1023 00:11:32.943161 18611 shortcut.go:89] Error loading discovery information: Get https://192.168.99.100:8443/api?timeout=32s: net/http: TLS handshake timeout
I1023 00:11:32.943340 18611 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.12.0 (darwin/amd64) kubernetes/0ed3388" 'https://192.168.99.100:8443/api?timeout=32s'
I1023 00:11:43.397428 18611 round_trippers.go:405] GET https://192.168.99.100:8443/api?timeout=32s in 10454 milliseconds
I1023 00:11:43.397454 18611 round_trippers.go:411] Response Headers:
I1023 00:11:43.397505 18611 cached_discovery.go:111] skipped caching discovery info due to Get https://192.168.99.100:8443/api?timeout=32s: net/http: TLS handshake timeout
I1023 00:11:43.397619 18611 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.12.0 (darwin/amd64) kubernetes/0ed3388" 'https://192.168.99.100:8443/api?timeout=32s'
I1023 00:11:53.510603 18611 round_trippers.go:405] GET https://192.168.99.100:8443/api?timeout=32s in 10113 milliseconds
I1023 00:11:53.510631 18611 round_trippers.go:411] Response Headers:
I1023 00:11:53.510677 18611 cached_discovery.go:111] skipped caching discovery info due to Get https://192.168.99.100:8443/api?timeout=32s: net/http: TLS handshake timeout
I1023 00:11:53.510771 18611 round_trippers.go:386] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.12.0 (darwin/amd64) kubernetes/0ed3388" 'https://192.168.99.100:8443/api?timeout=32s'
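The next thing I plan to try is recreating the VM and keeping kubectl traffic to the VM off the proxy (a sketch; the NO_PROXY value is the VM IP from the trace above, so adjust for your setup):
export NO_PROXY=localhost,127.0.0.1,192.168.99.100   # bypass the proxy for the minikube VM IP
minikube delete                                      # throw away the broken VM state
minikube start --vm-driver=hyperkit                  # recreate from scratch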

Related

Kubernetes on worker node - kubelet.service not starting

I am trying to set up a new worker node on CentOS 7.9 with the following commands.
# setenforce 0
# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
# firewall disabled already.
# swapoff -a
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# yum install kubeadm -y
# systemctl enable kubelet
# systemctl start kubelet
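Note for anyone following the same steps: swapoff -a above does not survive a reboot; a sketch of making it persistent (assumes the swap entry in /etc/fstab contains the word swap):
# sed -i '/ swap / s/^/#/' /etc/fstab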
But the kubelet service status shows the error below.
# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Wed 2020-12-02 16:49:22 IST; 3s ago
Docs: https://kubernetes.io/docs/
Process: 4442 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 4442 (code=exited, status=255)
Dec 02 16:49:22 k8s-node-01 systemd[1]: Unit kubelet.service entered failed state.
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/...81 +0x4f
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/...58 +0x8a
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: goroutine 47 [select]:
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000050be0)
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/...4 +0x105
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/...32 +0x57
Dec 02 16:49:22 k8s-node-01 systemd[1]: kubelet.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
I have the following Kubernetes and Docker versions installed.
# kubelet --version
Kubernetes v1.19.4
# docker --version
Docker version 19.03.14, build 5eb3275d40
I also tried to join the cluster, but even this fails.
# kubeadm join 65.66.67.68:6443 --token tu7qvt.1rfzhnxevg8m792h --discovery-token-ca-cert-hash sha256:48109668a4eadfs3c0c13a04d24a99bd82ff2eredefab6be6b78aadeead358074ee
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 65.66.67.55:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 65.66.67.55:10248: connect: connection refused.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
With the -v=9 option:
# kubeadm join 65.66.67.68:6443 --token tu7qvt.1rfzhnxevg8m792h --discovery-token-ca-cert-hash sha256:48109668a4eadfs3c0c13a04d24a99bd82ff2eredefab6be6b78aadeead358074ee -v=9
I1203 10:25:29.374052 11716 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.19.4 (linux/amd64) kubernetes/d360454" 'https://65.66.67.68:6443/api/v1/nodes/k8s-node-01?timeout=10s'
I1203 10:25:29.376358 11716 round_trippers.go:443] GET https://65.66.67.68:6443/api/v1/nodes/k8s-node-01?timeout=10s 404 Not Found in 2 milliseconds
I1203 10:25:29.376406 11716 round_trippers.go:449] Response Headers:
I1203 10:25:29.376411 11716 round_trippers.go:452] Content-Type: application/json
I1203 10:25:29.376415 11716 round_trippers.go:452] Content-Length: 192
I1203 10:25:29.376419 11716 round_trippers.go:452] Date: Thu, 03 Dec 2020 04:55:29 GMT
I1203 10:25:29.376423 11716 round_trippers.go:452] Cache-Control: no-cache, private
I1203 10:25:29.376443 11716 request.go:1097] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"k8s-node-01\" not found","reason":"NotFound","details":{"name":"k8s-node-01","kind":"nodes"},"code":404}
timed out waiting for the condition
error uploading crisocket
What could be wrong in the installation? Any direction would be helpful.
The node joined the cluster after I commented out the entries in the /etc/resolv.conf file; once the node had joined successfully, I un-commented them again. Now all the namespaces and nodes on my master are running fine.
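Roughly, the sequence was (a sketch; the sed patterns assume plain nameserver lines in /etc/resolv.conf):
# sed -i 's/^nameserver/#nameserver/' /etc/resolv.conf
# kubeadm join 65.66.67.68:6443 --token tu7qvt.1rfzhnxevg8m792h --discovery-token-ca-cert-hash sha256:...
# sed -i 's/^#nameserver/nameserver/' /etc/resolv.conf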

kubeadm join failure, error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition

When joining the node:
sudo kubeadm join 172.16.7.101:6443 --token 4mya3g.duoa5xxuxin0l6j3 --discovery-token-ca-cert-hash sha256:bba76ac7a207923e8cae0c466dac166500a8e0db43fb15ad9018b615bdbabeb2
The outputs:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
And systemctl status kubelet:
node@node:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed 2019-04-17 06:20:56 UTC; 12min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 26716 (kubelet)
Tasks: 16 (limit: 1111)
CGroup: /system.slice/kubelet.service
└─26716 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml -
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.022384 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.073969 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.122820 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.228838 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.273153 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.330578 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.431114 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.473501 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.531294 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.632347 26716 kubelet.go:2244] node "node" not found
Regarding the Unauthorized errors: I checked on the master with kubeadm token list, and the token is valid.
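How I checked on the master, plus a sketch of minting a fresh token in case the old one had gone stale:
kubeadm token list                          # confirm the token is present and not expired
kubeadm token create --print-join-command   # generate a new token and a ready-made join command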
So what's the problem? Thanks a lot.
Please verify the pre- and post-installation steps here:
Please also verify that your services are enabled and running, and check your Docker environment:
sudo systemctl enable docker
sudo systemctl enable kubelet
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
Are the results the same if you run the init command with --ignore-preflight-errors=all?
For more details, please also use journalctl -u kubelet.
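For instance (standard journalctl usage):
journalctl -u kubelet --no-pager | tail -n 50   # last 50 kubelet log lines
journalctl -u kubelet -f                        # follow new entries live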
For more detail on your logs, please take a look at the kubeadm issues on GitHub here:
Please provide more details about your environment so the issue can be recreated, and share your additional findings.
Could you also perform another test and run kubeadm init on your worker node, in the same way as on the first node (in short, create a second master node), just to verify your working environment?

Failed to create pod sandbox [flannel]

I am running into this error on random pods. Thank you @matthew-l-daniel for the comment, as I didn't know where to start.
Here are the contents of /opt/cni/bin on the node:
:/opt/cni/bin$ ls
bridge host-local loopback
Here are the kubelet logs for a container that failed.
Jan 30 15:42:00 ip-172-20-39-216 kubelet[32233]: E0130 15:42:00.924370 32233 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "postgres-core-0_service-master-459cf23(d8acae2f-24a2-11e9-b79c-0a0d1213cce2)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "postgres-core-0": Error response from daemon: grpc: the connection is unavailable
Jan 30 15:42:00 ip-172-20-39-216 kubelet[32233]: E0130 15:42:00.924380 32233 kuberuntime_manager.go:647] createPodSandbox for pod "postgres-core-0_service-master-459cf23(d8acae2f-24a2-11e9-b79c-0a0d1213cce2)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "postgres-core-0": Error response from daemon: grpc: the connection is unavailable
Jan 30 15:42:00 ip-172-20-39-216 kubelet[32233]: E0130 15:42:00.924427 32233 pod_workers.go:186] Error syncing pod d8acae2f-24a2-11e9-b79c-0a0d1213cce2 ("postgres-core-0_service-master-459cf23(d8acae2f-24a2-11e9-b79c-0a0d1213cce2)"), skipping: failed to "CreatePodSandbox" for "postgres-core-0_service-master-459cf23(d8acae2f-24a2-11e9-b79c-0a0d1213cce2)" with CreatePodSandboxError: "CreatePodSandbox for pod \"postgres-core-0_service-master-459cf23(d8acae2f-24a2-11e9-b79c-0a0d1213cce2)\" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod \"postgres-core-0\": Error response from daemon: grpc: the connection is unavailable"
As for the flannel container logs: there are many flannel pods running, and all are healthy.
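How I checked them, as a sketch (the pod name pattern kube-flannel-ds-... is an assumption based on the usual flannel DaemonSet):
kubectl -n kube-system get pods -o wide | grep flannel   # expect one Running flannel pod per node
kubectl -n kube-system logs kube-flannel-ds-xxxxx        # substitute a real pod name for the logs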
Kubernetes v1.10.11
Docker version 17.03.2-ce, build f5ec1e2
Flannel logs
E0130 15:34:16.536354 1 vxlan_network.go:187] DelFDB failed: no such file or directory
E0130 15:34:16.536411 1 vxlan_network.go:191] failed to delete vxlanRoute (100.107.178.0/24 -> 100.107.178.0): no such process
E0130 17:33:44.848163 1 vxlan_network.go:187] DelFDB failed: no such file or directory
E0130 17:33:44.848219 1 vxlan_network.go:191] failed to delete vxlanRoute (100.107.201.0/24 -> 100.107.201.0): no such process

kubeadm init 1.9 hangs with vsphere cloud provider

kubeadm init seems to hang since I started using the vSphere cloud provider. I followed the instructions from here. Has anybody got it working with 1.9?
root@master-0:~# kubeadm init --config /tmp/kube.yaml
[init] Using Kubernetes version: v1.9.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "master-0" could not be reached
[WARNING Hostname]: hostname "master-0" lookup master-0 on 8.8.8.8:53: no such host
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master-0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.11.0.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
Master OS details:
root@master-0:~# uname -r
4.4.0-21-generic
root@master-0:~# docker version
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Experimental: false
root@master-0:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
ID=ubuntu
The kubelet service seems to be running fine:
root@master-0:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 90-local-extras.conf
Active: active (running) since Mon 2018-01-22 11:25:00 UTC; 13min ago
Docs: http://kubernetes.io/docs/
Main PID: 4270 (kubelet)
Tasks: 13 (limit: 512)
Memory: 37.6M
CPU: 11.626s
CGroup: /system.slice/kubelet.service
└─4270 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
journalctl -f -u kubelet shows some connection-refused errors, probably because the networking service is missing. Those errors should go away once a networking add-on is installed after kubeadm init (see the sketch after the log excerpt below).
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.759764 1184 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.761350 1184 controller.go:114] kubelet config controller: starting controller
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.762632 1184 controller.go:118] kubelet config controller: validating combination of defaults and flags
Jan 22 11:17:46 localhost systemd[1]: Started Kubernetes systemd probe.
Jan 22 11:17:46 localhost kubelet[1184]: W0122 11:17:46.070619 1184 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.081384 1184 server.go:182] Version: v1.9.1
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.081417 1184 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.082271 1184 plugins.go:101] No cloud provider specified.
Jan 22 11:17:46 localhost kubelet[1184]: error: failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Unit entered failed state.
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 11:17:56 localhost systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jan 22 11:17:56 localhost systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 22 11:17:56 localhost systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.410883 1229 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.411198 1229 controller.go:114] kubelet config controller: starting controller
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.411353 1229 controller.go:118] kubelet config controller: validating combination of defaults and flags
Jan 22 11:17:56 localhost systemd[1]: Started Kubernetes systemd probe.
Jan 22 11:17:56 localhost kubelet[1229]: W0122 11:17:56.424264 1229 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429102 1229 server.go:182] Version: v1.9.1
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429156 1229 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429247 1229 plugins.go:101] No cloud provider specified.
Jan 22 11:17:56 localhost kubelet[1229]: E0122 11:17:56.461608 1229 certificate_manager.go:314] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://10.11.0.101:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 10.11.0.101:6443: getsockopt: connection refused
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.491374 1229 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.492069 1229 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.492102 1229 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
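For reference, installing a network add-on right after kubeadm init would look roughly like this (a sketch; flannel is only one option, and the manifest URL is the one the flannel project published around this release):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml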
docker ps, controller & scheduler logs
root@master-0:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6db549891439 677911f7ae8f "kube-scheduler --..." About an hour ago Up About an hour k8s_kube-scheduler_kube-scheduler-master-0_kube-system_df32e281019039e73be77e3f53d09596_0
4f7ddefbd86e 4978f9a64966 "kube-controller-m..." About an hour ago Up About an hour k8s_kube-controller-manager_kube-controller-manager-master-0_kube-system_34bad395be69e74db6304d6c4218c536_0
18604db89db6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-scheduler-master-0_kube-system_df32e281019039e73be77e3f53d09596_0
252b86ea4b5e gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-controller-manager-master-0_kube-system_34bad395be69e74db6304d6c4218c536_0
4021061bf8a6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-apiserver-master-0_kube-system_7a3ae9279d0ca7b4ada8333fbe7442ed_0
4f94163d313b gcr.io/google_containers/etcd-amd64:3.1.10 "etcd --name=etcd0..." About an hour ago Up About an hour 0.0.0.0:2379-2380->2379-2380/tcp, 0.0.0.0:4001->4001/tcp, 7001/tcp etcd
root@master-0:~# docker logs -f 4f7ddefbd86e
I0122 11:25:06.253706 1 controllermanager.go:108] Version: v1.9.1
I0122 11:25:06.258712 1 leaderelection.go:174] attempting to acquire leader lease...
E0122 11:25:06.259448 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:09.711377 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:13.969270 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:17.564964 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:20.616174 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
root@master-0:~# docker logs -f 6db549891439
W0122 11:25:06.285765 1 server.go:159] WARNING: all flags than --config are deprecated. Please begin using a config file ASAP.
I0122 11:25:06.292865 1 server.go:551] Version: v1.9.1
I0122 11:25:06.295776 1 server.go:570] starting healthz server on 127.0.0.1:10251
E0122 11:25:06.295947 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: Get https://10.11.0.101:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296027 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: Get https://10.11.0.101:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296092 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: Get https://10.11.0.101:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296160 1 reflector.go:205] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:590: Failed to list *v1.Pod: Get https://10.11.0.101:6443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296218 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.StatefulSet: Get https://10.11.0.101:6443/apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.297374 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolume: Get https://10.11.0.101:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
There was a bug in the controller manager when starting with the vSphere cloud provider; see https://github.com/kubernetes/kubernetes/issues/57279. It was fixed in 1.9.2.
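So upgrading the control-plane packages past 1.9.1 should avoid the bug; on Ubuntu 16.04 with the Kubernetes apt repository that is roughly (a sketch; package revisions such as -00 may differ):
apt-get update
apt-get install -y kubeadm=1.9.2-00 kubelet=1.9.2-00 kubectl=1.9.2-00   # pin to the fixed release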

Can't install Kubernetes cluster on CentOS 7

I followed this guide to install Kubernetes:
https://www.linuxtechi.com/install-kubernetes-1-7-centos7-rhel7/
When I got to the kubeadm init step, I got this error:
$ kubeadm init --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by that:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- There is no internet connection; so the kubelet can't pull the following control plane images:
- gcr.io/google_containers/kube-apiserver-amd64:v1.8.3
- gcr.io/google_containers/kube-controller-manager-amd64:v1.8.3
- gcr.io/google_containers/kube-scheduler-amd64:v1.8.3
You can troubleshoot this for example with the following commands if you're on a systemd-powered system:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster
Checking systemctl status kubelet shows:
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Fri 2017-11-10 05:34:12 UTC; 6s ago
Docs: http://kubernetes.io/docs/
Process: 29927 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
Main PID: 29927 (code=exited, status=1/FAILURE)
Nov 10 05:34:12 master systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Nov 10 05:34:12 master systemd[1]: Unit kubelet.service entered failed state.
Nov 10 05:34:12 master systemd[1]: kubelet.service failed.
Checking journalctl -xeu kubelet shows:
Nov 10 05:35:15 master systemd[1]: kubelet.service holdoff time over, scheduling restart.
Nov 10 05:35:15 master systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Nov 10 05:35:15 master systemd[1]: Starting kubelet: The Kubernetes Node Agent...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.364837 30174 feature_gate.go:156] feature gates: map[]
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.364917 30174 controller.go:114] kubelet config controller: starting controller
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.364921 30174 controller.go:118] kubelet config controller: validating combination of defaults and flags
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.375149 30174 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.375226 30174 client.go:95] Start docker client with request timeout=2m0s
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.377200 30174 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.382890 30174 feature_gate.go:156] feature gates: map[]
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.383011 30174 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider should be set explicitly
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.408678 30174 certificate_manager.go:361] Requesting new certificate.
Nov 10 05:35:15 master kubelet[30174]: E1110 05:35:15.409287 30174 certificate_manager.go:284] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://10.0.2.15:6443/apis/certificates.k8s.io/v1beta1/certifica
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.411480 30174 manager.go:149] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.425796 30174 manager.go:157] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp [::1]:15441: getsockopt: connection refused
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.426006 30174 manager.go:166] unable to connect to CRI-O api service: Get http://%2Fvar%2Frun%2Fcrio.sock/info: dial unix /var/run/crio.sock: connect: no such file or directory
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.440364 30174 fs.go:139] Filesystem UUIDs: map[4537d533-47ff-463c-bffc-7ce294d9c93a:/dev/dm-1 598bbfb9-027e-4f52-a5b3-c4d3d1fbc2b8:/dev/dm-0 8ffa0ee9-e1a8-4c03-acce-b65b342c6935:/dev/sda2]
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.440395 30174 fs.go:140] Filesystem partitions: map[tmpfs:{mountpoint:/dev/shm major:0 minor:17 fsType:tmpfs blockSize:0} /dev/mapper/VolGroup00-LogVol00:{mountpoint:/var/lib/docker/overlay major:253 minor:0 fsType:xf
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.441589 30174 manager.go:216] Machine: {NumCores:1 CpuFrequency:3100000 MemoryCapacity:1040621568 HugePages:[{PageSize:2048 NumPages:0}] MachineID:a0b78b0170c248288e172d5196d59063 SystemUUID:A0B78B01-70C2-4828-8E17-2D
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.446544 30174 manager.go:222] Version: {KernelVersion:3.10.0-693.5.2.el7.x86_64 ContainerOsVersion:CentOS Linux 7 (Core) DockerVersion:17.09.0-ce DockerAPIVersion:1.32 CadvisorVersion: CadvisorRevision:}
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.447201 30174 server.go:422] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.451260 30174 container_manager_linux.go:252] container manager verified user specified cgroup-root exists: /
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.451293 30174 container_manager_linux.go:257] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.451403 30174 container_manager_linux.go:288] Creating device plugin handler: false
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.451616 30174 kubelet.go:273] Adding manifest file: /etc/kubernetes/manifests
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.451710 30174 kubelet.go:283] Watching apiserver
Nov 10 05:35:15 master kubelet[30174]: E1110 05:35:15.480061 30174 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.0.2.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&resourceVersion=0: dial tcp 10.0.2.15
Nov 10 05:35:15 master kubelet[30174]: E1110 05:35:15.500829 30174 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.0.2.15:6443/api/v1/services?resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection r
Nov 10 05:35:15 master kubelet[30174]: E1110 05:35:15.500917 30174 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.0.2.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&resourceVersion=0: dial tcp 10.
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.541334 30174 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.541369 30174 kubelet.go:517] Hairpin mode set to "hairpin-veth"
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.541616 30174 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.548689 30174 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 10 05:35:15 master kubelet[30174]: W1110 05:35:15.553143 30174 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 10 05:35:15 master kubelet[30174]: I1110 05:35:15.553164 30174 docker_service.go:207] Docker cri networking managed by cni
Nov 10 05:35:15 master kubelet[30174]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"
Nov 10 05:35:15 master systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Nov 10 05:35:15 master systemd[1]: Unit kubelet.service entered failed state.
Nov 10 05:35:15 master systemd[1]: kubelet.service failed.
The key point in the logs is the misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs". The kubelet refuses to start because it and Docker disagree on which cgroup driver to use. As the official documentation puts it:
Make sure that the cgroup driver used by kubelet is the same as the one used by Docker. To ensure compatibility you can either update Docker, or ensure the --cgroup-driver kubelet flag is set to the same value as Docker (e.g. cgroupfs).
-- Installing kubeadm
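To confirm what each side is actually using on your node, one quick check (assuming the standard kubeadm drop-in path shown in the logs above) is:
docker info 2>/dev/null | grep -i 'cgroup driver'
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
The first command prints Docker's driver (here cgroupfs), and the second shows the --cgroup-driver value the kubelet is started with (here systemd).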
Either update Docker to use systemd:
cat << EOF > /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Then restart the Docker service so the new daemon.json takes effect.
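On a systemd-based host that would be, for example:
systemctl restart docker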
Or update the kubelet to use cgroupfs:
sed -i -E 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Then reload systemd and restart the kubelet.
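Because the change is to a systemd drop-in file, a daemon-reload is needed before the restart:
systemctl daemon-reload
systemctl restart kubelet
Once the two drivers match, the kubelet should stay up and kubeadm init should get past the [kubelet-check] phase.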