Installing Kubernetes cluster on master node - kubernetes

I am new to the container world and am trying to set up a Kubernetes cluster locally on two Linux VMs. During cluster initialization it gets stuck at:
[apiclient] Created API client, waiting for the control plane to become ready
I have followed the pre-flight check steps:
[root@lm--kube-glusterfs--central ~]# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.0
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] WARNING: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [lm--kube-glusterfs--central kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.99.7.215]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
OS: Red Hat Enterprise Linux Server release 7.4 (Maipo)
Kubernetes versions:
kubeadm-1.6.0-0.x86_64.rpm
kubectl-1.6.0-0.x86_64.rpm
kubelet-1.6.0-0.x86_64.rpm
kubernetes-cni-0.6.0-0.x86_64.rpm
cri-tools-1.12.0-0.x86_64.rpm
How can I debug this issue, or is there a version mismatch between the Kubernetes components? The same setup worked before, when I used the google-cloud repo to install with yum -y install kubelet kubeadm kubectl.
I could not use the repo this time due to firewall issues, hence I am installing from the RPMs.
Output of journalctl -xeu kubelet:
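For reference, installing from the downloaded RPMs can be sketched like this (filenames are the ones listed above; `yum localinstall` resolves any local inter-package dependencies in one transaction):

```shell
# Install the locally downloaded RPMs when the repo is unreachable.
# Filenames/versions are taken from the question.
sudo yum -y localinstall \
  kubeadm-1.6.0-0.x86_64.rpm \
  kubectl-1.6.0-0.x86_64.rpm \
  kubelet-1.6.0-0.x86_64.rpm \
  kubernetes-cni-0.6.0-0.x86_64.rpm \
  cri-tools-1.12.0-0.x86_64.rpm

# Enable the kubelet so the pre-flight warning about it goes away.
sudo systemctl enable kubelet.service
```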
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: W0702 09:45:09.841246 28749 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: I0702 09:45:09.841304 28749 kubelet.go:494] Hairpin mode set to "hairpin-veth"
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: W0702 09:45:09.845626 28749 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: I0702 09:45:09.857969 28749 docker_service.go:187] Docker cri networking managed by kubernetes.io/no-op
Jul 02 09:45:09 lm--son-config-cn--central kubelet[28749]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs
Jul 02 09:45:09 lm--son-config-cn--central systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Jul 02 09:45:09 lm--son-config-cn--central systemd[1]: Unit kubelet.service entered failed state.
Jul 02 09:45:09 lm--son-config-cn--central systemd[1]: kubelet.service failed.
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: kubelet.service holdoff time over, scheduling restart.
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: Started Kubernetes Kubelet Server.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: Starting Kubernetes Kubelet Server...
-- Subject: Unit kubelet.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has begun starting up.
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.251465 28772 feature_gate.go:144] feature gates: map[]
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.251889 28772 server.go:469] No API client: no api servers specified
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.252009 28772 docker.go:364] Connecting to docker on unix:///var/run/docker.sock
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.252049 28772 docker.go:384] Start docker client with request timeout=2m0s
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.259436 28772 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.275674 28772 manager.go:143] cAdvisor running in container: "/system.slice"
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.317509 28772 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial r
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.328881 28772 fs.go:117] Filesystem partitions: map[/dev/vda2:{mountpoint:/ major:253 mino
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.335711 28772 manager.go:198] Machine: {NumCores:8 CpuFrequency:2095078 MemoryCapacity:337
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: [7] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:4194304 Type:Unifie
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.338001 28772 manager.go:204] Version: {KernelVersion:3.10.0-693.11.6.el7.x86_64 Container
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.338967 28772 server.go:350] No api server defined - no events will be sent to API server.
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.338980 28772 server.go:509] --cgroups-per-qos enabled, but --cgroup-root was not specifie
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.342041 28772 container_manager_linux.go:245] container manager verified user specified cg
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.342071 28772 container_manager_linux.go:250] Creating Container Manager object based on N
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.346505 28772 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.346571 28772 kubelet.go:494] Hairpin mode set to "hairpin-veth"
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: W0702 09:45:20.351473 28772 cni.go:157] Unable to update cni config: No networks found in /etc/cni/net.d
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: I0702 09:45:20.363583 28772 docker_service.go:187] Docker cri networking managed by kubernetes.io/no-op
Jul 02 09:45:20 lm--son-config-cn--central kubelet[28772]: error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: kubelet.service: main process exited, code=exited, status=1/FAILURE
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: Unit kubelet.service entered failed state.
Jul 02 09:45:20 lm--son-config-cn--central systemd[1]: kubelet.service failed.

This is related to a known issue.
There are a few fixes shown there; all you need to do is change the kubelet cgroup driver to systemd so that it matches Docker's.
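A minimal sketch of that fix, assuming Docker as the runtime and the default drop-in path shipped by the kubeadm RPM (`/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`) — the key point is that the kubelet and the container runtime must agree on the cgroup driver:

```shell
# 1. Make Docker use the systemd cgroup driver.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# 2. Point the kubelet at the same driver in its kubeadm drop-in
#    (path is the usual kubeadm location; adjust if yours differs).
sudo sed -i 's/cgroup-driver=cgroupfs/cgroup-driver=systemd/' \
  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# 3. Reload unit files and restart both daemons.
sudo systemctl daemon-reload
sudo systemctl restart docker kubelet
```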

Related

kubelet.service: Unit entered failed state - node in NotReady state in Kubernetes cluster

I am trying to deploy Spring Boot microservices on a Kubernetes cluster with 1 master and 2 worker nodes. When I check node state with sudo kubectl get nodes, one of my worker nodes shows NotReady in the status column.
To troubleshoot, I ran:
sudo journalctl -u kubelet
The kubelet service keeps stopping with kubelet.service: Unit entered failed state. The following is the output of that command:
-- Logs begin at Fri 2020-01-03 04:56:18 EST, end at Fri 2020-01-03 05:32:47 EST. --
Jan 03 04:56:25 MILDEVKUB050 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 03 04:56:31 MILDEVKUB050 kubelet[970]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
Jan 03 04:56:31 MILDEVKUB050 kubelet[970]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi
Jan 03 04:56:32 MILDEVKUB050 kubelet[970]: I0103 04:56:32.053962 970 server.go:416] Version: v1.17.0
Jan 03 04:56:32 MILDEVKUB050 kubelet[970]: I0103 04:56:32.084061 970 plugins.go:100] No cloud provider specified.
Jan 03 04:56:32 MILDEVKUB050 kubelet[970]: I0103 04:56:32.235928 970 server.go:821] Client rotation is on, will bootstrap in background
Jan 03 04:56:32 MILDEVKUB050 kubelet[970]: I0103 04:56:32.280173 970 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-curre
Jan 03 04:56:38 MILDEVKUB050 kubelet[970]: I0103 04:56:38.107966 970 server.go:641] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jan 03 04:56:38 MILDEVKUB050 kubelet[970]: F0103 04:56:38.109401 970 server.go:273] failed to run Kubelet: running with swap on is not supported, please disable swa
Jan 03 04:56:38 MILDEVKUB050 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jan 03 04:56:38 MILDEVKUB050 systemd[1]: kubelet.service: Unit entered failed state.
Jan 03 04:56:38 MILDEVKUB050 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 03 04:56:48 MILDEVKUB050 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jan 03 04:56:48 MILDEVKUB050 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 03 04:56:48 MILDEVKUB050 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 03 04:56:48 MILDEVKUB050 kubelet[1433]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
Jan 03 04:56:48 MILDEVKUB050 kubelet[1433]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
Jan 03 04:56:48 MILDEVKUB050 kubelet[1433]: I0103 04:56:48.901632 1433 server.go:416] Version: v1.17.0
Jan 03 04:56:48 MILDEVKUB050 kubelet[1433]: I0103 04:56:48.907654 1433 plugins.go:100] No cloud provider specified.
Jan 03 04:56:48 MILDEVKUB050 kubelet[1433]: I0103 04:56:48.907806 1433 server.go:821] Client rotation is on, will bootstrap in background
Jan 03 04:56:48 MILDEVKUB050 kubelet[1433]: I0103 04:56:48.947107 1433 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-curr
Jan 03 04:56:49 MILDEVKUB050 kubelet[1433]: I0103 04:56:49.263777 1433 server.go:641] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to
Jan 03 04:56:49 MILDEVKUB050 kubelet[1433]: F0103 04:56:49.264219 1433 server.go:273] failed to run Kubelet: running with swap on is not supported, please disable sw
Jan 03 04:56:49 MILDEVKUB050 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jan 03 04:56:49 MILDEVKUB050 systemd[1]: kubelet.service: Unit entered failed state.
Jan 03 04:56:49 MILDEVKUB050 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 03 04:56:59 MILDEVKUB050 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jan 03 04:56:59 MILDEVKUB050 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 03 04:56:59 MILDEVKUB050 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --conf
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: I0103 04:56:59.712729 1500 server.go:416] Version: v1.17.0
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: I0103 04:56:59.714927 1500 plugins.go:100] No cloud provider specified.
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: I0103 04:56:59.715248 1500 server.go:821] Client rotation is on, will bootstrap in background
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: I0103 04:56:59.763508 1500 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-curr
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: I0103 04:56:59.956706 1500 server.go:641] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to
Jan 03 04:56:59 MILDEVKUB050 kubelet[1500]: F0103 04:56:59.957078 1500 server.go:273] failed to run Kubelet: running with swap on is not supported, please disable sw
Jan 03 04:56:59 MILDEVKUB050 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jan 03 04:56:59 MILDEVKUB050 systemd[1]: kubelet.service: Unit entered failed state.
Jan 03 04:56:59 MILDEVKUB050 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 03 04:57:10 MILDEVKUB050 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jan 03 04:57:10 MILDEVKUB050 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 03 04:57:10 MILDEVKUB050 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Log file excerpt: kubelet.service: Unit entered failed state
I tried restarting the kubelet, but the node state did not change; it stays NotReady.
Updates
When I run the command systemctl list-units --type=swap --state=active, I get the following response:
docker#MILDEVKUB040:~$ systemctl list-units --type=swap --state=active
UNIT LOAD ACTIVE SUB DESCRIPTION
dev-mapper-MILDEVDCR01\x2d\x2dvg\x2dswap_1.swap loaded active active /dev/mapper/MILDEVDCR01--vg-swap_1
Important
Every time the node goes NotReady like this, I have to disable swap, reload the daemon, and restart the kubelet; the node then becomes Ready again, only for the problem to recur.
How can I find a permanent solution for this?
failed to run Kubelet: running with swap on is not supported, please disable swap
You need to disable swap on the system for the kubelet to work. You can disable it with sudo swapoff -a.
On systemd-based systems there is another mechanism that enables swap partitions: swap units, which get re-activated whenever systemd reloads, even if you have already turned swap off with swapoff -a:
https://www.freedesktop.org/software/systemd/man/systemd.swap.html
Check whether you have any swap units with systemctl list-units --type=swap --state=active.
You can permanently disable any active swap unit with systemctl mask <unit name>.
Note: do not use systemctl disable <unit name> for this, because a disabled swap unit is activated again when systemd reloads. Use systemctl mask <unit name> instead.
To make sure swap doesn't get re-enabled when your system reboots (after a power cycle or for any other reason), remove or comment out the swap entries in /etc/fstab.
Summarizing:
1. Run sudo swapoff -a.
2. Check for swap units with systemctl list-units --type=swap --state=active. If any are active, mask them with systemctl mask <unit name>.
3. Remove or comment out the swap entries in /etc/fstab.
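Putting those steps together as a sketch — the swap unit name here is the one from the question's output; substitute whatever systemctl list-units reports on your node:

```shell
# Turn swap off immediately (lost on reboot/daemon-reload by itself).
sudo swapoff -a

# List active swap units, then mask each one so systemd can never start it.
systemctl list-units --type=swap --state=active
sudo systemctl mask 'dev-mapper-MILDEVDCR01\x2d\x2dvg\x2dswap_1.swap'

# Comment out every swap line in /etc/fstab (do NOT delete the file itself).
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
```

Masking creates a symlink to /dev/null for the unit, which is why it survives daemon reloads where `disable` does not.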
The root cause is the swap space. To disable it completely, follow these steps:
run swapoff -a: this immediately disables swap, but it will be re-activated on restart
remove any swap entry from /etc/fstab
reboot the system
If the swap is gone, good. If, for some reason, it is still there, you have to remove the swap partition: repeat steps 1 and 2 and, after that, use fdisk or parted to remove the (now unused) swap partition. Take great care here: removing the wrong partition will have disastrous effects!
Then reboot. This should resolve your issue.
Removing /etc/fstab itself will leave the VM in an error state; I think we should find another way to solve this issue. I tried removing the fstab file and every command (install, ping, and others) failed. Only the swap entries should be removed, not the whole file.

Kubeadm join failed | kubelet service is down

I am trying to join a worker node to a k8s cluster:
sudo kubeadm join 10.2.67.201:6443 --token x --discovery-token-ca-cert-hash sha256:x
But I get an error at this stage:
'curl -sSL http://localhost:10248/healthz'
failed with error: Get http://localhost:10248/healthz: dial tcp
Error:
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I can see that the kubelet service is down:
journalctl -xeu kubelet
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished shutting down.
Nov 22 15:49:00 s001as-ceph-node-03 systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Nov 22 15:49:00 s001as-ceph-node-03 kubelet[286703]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
Nov 22 15:49:00 s001as-ceph-node-03 kubelet[286703]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
Nov 22 15:49:00 s001as-ceph-node-03 kubelet[286703]: F1122 15:49:00.224350 286703 server.go:251] unable to load client CA file /etc/kubernetes/ssl/ca.crt: open /etc/kubernetes/ssl/ca.cr
Nov 22 15:49:00 s001as-ceph-node-03 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Nov 22 15:49:00 s001as-ceph-node-03 systemd[1]: Unit kubelet.service entered failed state.
Nov 22 15:49:00 s001as-ceph-node-03 systemd[1]: kubelet.service failed.
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished shutting down.
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Nov 22 15:49:10 s001as-ceph-node-03 kubelet[286717]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
Nov 22 15:49:10 s001as-ceph-node-03 kubelet[286717]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
Nov 22 15:49:10 s001as-ceph-node-03 kubelet[286717]: F1122 15:49:10.476478 286717 server.go:251] unable to load client CA file /etc/kubernetes/ssl/ca.crt: open /etc/kubernetes/ssl/ca.cr
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: Unit kubelet.service entered failed state.
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: kubelet.service failed.
I fixed it. The log shows the kubelet looking for the client CA at /etc/kubernetes/ssl/ca.crt, so just copy /etc/kubernetes/pki/ca.crt to /etc/kubernetes/ssl/ca.crt.
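A sketch of that fix — paths are the ones shown in the log above; the directory creation and kubelet restart are assumed extra steps, in case /etc/kubernetes/ssl does not yet exist:

```shell
# The kubelet expects the client CA at /etc/kubernetes/ssl/ca.crt (per the
# "unable to load client CA file" error), but kubeadm writes it to
# /etc/kubernetes/pki/ca.crt.
sudo mkdir -p /etc/kubernetes/ssl
sudo cp /etc/kubernetes/pki/ca.crt /etc/kubernetes/ssl/ca.crt
sudo systemctl restart kubelet
```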

Error in starting the Kubernetes minikube cluster

I have Kubernetes v1.10.0 and minikube v0.28 installed on macOS 10.13.5, but while starting minikube I get constant errors:
$ minikube start
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0708 16:13:43.193267 13864 start.go:294] Error starting cluster: timed out waiting to unmark master: getting node minikube: Get https://192.168.99.100:8443/api/v1/nodes/minikube: dial tcp 192.168.99.100:8443: i/o timeout
I have also tried minikube start and minikube delete, as well as reinstalling different minikube versions, but that didn't help.
$ minikube version
minikube version: v0.28.0
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
I am not even able to ping the virtual machine, although I can see the minikube VM in my VirtualBox in Running status.
Minikube logs:
F0708 16:46:10.394098 2651 server.go:233] failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
Jul 08 16:46:10 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
Jul 08 16:46:10 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 08 16:46:20 minikube systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jul 08 16:46:20 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 08 16:46:20 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jul 08 16:46:20 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jul 08 16:46:20 minikube kubelet[2730]: Flag --cadvisor-port has been deprecated, The default will change to 0 (disabled) in 1.12, and the cadvisor port will be removed entirely in 1.13
Jul 08 16:46:20 minikube kubelet[2730]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 08 16:46:20 minikube kubelet[2730]: Flag --allow-privileged has been deprecated, will be removed in a future version
Jul 08 16:46:20 minikube kubelet[2730]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 08 16:46:20 minikube kubelet[2730]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 08 16:46:20 minikube kubelet[2730]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 08 16:46:20 minikube kubelet[2730]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 08 16:46:20 minikube kubelet[2730]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 08 16:46:20 minikube kubelet[2730]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.694078 2730 feature_gate.go:226] feature gates: &{{} map[]}
Jul 08 16:46:20 minikube kubelet[2730]: W0708 16:46:20.702932 2730 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.703121 2730 server.go:376] Version: v1.10.0
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.703187 2730 feature_gate.go:226] feature gates: &{{} map[]}
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.703285 2730 plugins.go:89] No cloud provider specified.
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.726584 2730 server.go:613] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.726934 2730 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.726983 2730 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true}
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.728063 2730 container_manager_linux.go:266] Creating device plugin manager: true
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.728189 2730 state_mem.go:36] [cpumanager] initializing new in-memory state store
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.728313 2730 state_file.go:82] [cpumanager] state file: created new state file "/var/lib/kubelet/cpu_manager_state"
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.728418 2730 kubelet.go:272] Adding pod path: /etc/kubernetes/manifests
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.728460 2730 kubelet.go:297] Watching apiserver
Jul 08 16:46:20 minikube kubelet[2730]: E0708 16:46:20.734450 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.99.100:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:20 minikube kubelet[2730]: E0708 16:46:20.734566 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.99.100:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:20 minikube kubelet[2730]: E0708 16:46:20.735067 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.100:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:20 minikube kubelet[2730]: W0708 16:46:20.735453 2730 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.735561 2730 kubelet.go:556] Hairpin mode set to "hairpin-veth"
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.744020 2730 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.744065 2730 client.go:104] Start docker client with request timeout=2m0s
Jul 08 16:46:20 minikube kubelet[2730]: W0708 16:46:20.750158 2730 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.752473 2730 docker_service.go:244] Docker cri networking managed by kubernetes.io/no-op
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.756618 2730 docker_service.go:249] Docker Info: &{ID:K2FS:LJJY:RYGP:JKVS:74G5:3HA4:L26I:VOFW:CGF5:JB6F:BMQV:G3GO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:35 SystemTime:2018-07-08T16:46:20.753639901Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.16.14 OperatingSystem:Buildroot 2018.05 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc420122fc0 NCPU:2 MemTotal:2088189952 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikube Labels:[provider=virtualbox] ExperimentalBuild:false ServerVersion:17.12.1-ce ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:docker-runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9b55aab90508bd389d7654c4baf173a981477d55 Expected:9b55aab90508bd389d7654c4baf173a981477d55} RuncCommit:{ID:9f9c96235cc97674e935002fc3d78361b696a69e Expected:9f9c96235cc97674e935002fc3d78361b696a69e} InitCommit:{ID:N/A Expected:} SecurityOptions:[name=seccomp,profile=default]}
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.756704 2730 docker_service.go:262] Setting cgroupDriver to cgroupfs
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.768856 2730 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.772344 2730 kuberuntime_manager.go:186] Container runtime docker initialized, version: 17.12.1-ce, apiVersion: 1.35.0
Jul 08 16:46:20 minikube kubelet[2730]: W0708 16:46:20.772666 2730 probe.go:215] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.772949 2730 csi_plugin.go:61] kubernetes.io/csi: plugin initializing...
Jul 08 16:46:20 minikube kubelet[2730]: E0708 16:46:20.821518 2730 kubelet.go:1277] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.822056 2730 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.822138 2730 status_manager.go:140] Starting to sync pod status with apiserver
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.822179 2730 kubelet.go:1777] Starting kubelet main sync loop.
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.822215 2730 kubelet.go:1794] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.822420 2730 server.go:129] Starting to listen on 0.0.0.0:10250
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.822963 2730 server.go:299] Adding debug handlers to kubelet server.
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.823936 2730 volume_manager.go:247] Starting Kubelet Volume Manager
Jul 08 16:46:20 minikube kubelet[2730]: E0708 16:46:20.824420 2730 event.go:209] Unable to write event: 'Post https://192.168.99.100:8443/api/v1/namespaces/default/events: dial tcp 192.168.99.100:8443: getsockopt: connection refused' (may retry after sleeping)
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.825365 2730 server.go:944] Started kubelet
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.825492 2730 desired_state_of_world_populator.go:129] Desired state populator starts to run
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.922627 2730 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.924838 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:20 minikube kubelet[2730]: I0708 16:46:20.927054 2730 kubelet_node_status.go:82] Attempting to register node minikube
Jul 08 16:46:20 minikube kubelet[2730]: E0708 16:46:20.927586 2730 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://192.168.99.100:8443/api/v1/nodes: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:21 minikube kubelet[2730]: I0708 16:46:21.123837 2730 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
Jul 08 16:46:21 minikube kubelet[2730]: I0708 16:46:21.127937 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:21 minikube kubelet[2730]: I0708 16:46:21.130362 2730 kubelet_node_status.go:82] Attempting to register node minikube
Jul 08 16:46:21 minikube kubelet[2730]: E0708 16:46:21.130766 2730 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://192.168.99.100:8443/api/v1/nodes: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:21 minikube kubelet[2730]: I0708 16:46:21.524092 2730 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
Jul 08 16:46:21 minikube kubelet[2730]: I0708 16:46:21.531377 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:21 minikube kubelet[2730]: I0708 16:46:21.534027 2730 kubelet_node_status.go:82] Attempting to register node minikube
Jul 08 16:46:21 minikube kubelet[2730]: E0708 16:46:21.534443 2730 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://192.168.99.100:8443/api/v1/nodes: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:21 minikube kubelet[2730]: E0708 16:46:21.736215 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.99.100:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:21 minikube kubelet[2730]: E0708 16:46:21.741184 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.99.100:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:21 minikube kubelet[2730]: E0708 16:46:21.742867 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.100:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:22 minikube kubelet[2730]: I0708 16:46:22.324217 2730 kubelet.go:1794] skipping pod synchronization - [container runtime is down]
Jul 08 16:46:22 minikube kubelet[2730]: I0708 16:46:22.334948 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:22 minikube kubelet[2730]: I0708 16:46:22.337989 2730 kubelet_node_status.go:82] Attempting to register node minikube
Jul 08 16:46:22 minikube kubelet[2730]: E0708 16:46:22.338431 2730 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://192.168.99.100:8443/api/v1/nodes: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:22 minikube kubelet[2730]: E0708 16:46:22.737509 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.99.100:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:22 minikube kubelet[2730]: E0708 16:46:22.741882 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.99.100:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:22 minikube kubelet[2730]: E0708 16:46:22.743575 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.100:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:22 minikube kubelet[2730]: I0708 16:46:22.908788 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:22 minikube kubelet[2730]: I0708 16:46:22.910882 2730 cpu_manager.go:155] [cpumanager] starting with none policy
Jul 08 16:46:22 minikube kubelet[2730]: I0708 16:46:22.910923 2730 cpu_manager.go:156] [cpumanager] reconciling every 10s
Jul 08 16:46:22 minikube kubelet[2730]: I0708 16:46:22.910931 2730 policy_none.go:42] [cpumanager] none policy: Start
Jul 08 16:46:22 minikube kubelet[2730]: Starting Device Plugin manager
Jul 08 16:46:22 minikube kubelet[2730]: E0708 16:46:22.922215 2730 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node "minikube" not found
Jul 08 16:46:23 minikube kubelet[2730]: E0708 16:46:23.739107 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.99.100:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:23 minikube kubelet[2730]: E0708 16:46:23.742661 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.99.100:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:23 minikube kubelet[2730]: E0708 16:46:23.744902 2730 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.99.100:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:23 minikube kubelet[2730]: I0708 16:46:23.925063 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:23 minikube kubelet[2730]: I0708 16:46:23.936151 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:23 minikube kubelet[2730]: I0708 16:46:23.936545 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:23 minikube kubelet[2730]: I0708 16:46:23.938910 2730 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 08 16:46:23 minikube kubelet[2730]: I0708 16:46:23.946028 2730 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/099f1c2b79126109140a1f77e211df00-kubeconfig") pod "kube-scheduler-minikube" (UID: "099f1c2b79126109140a1f77e211df00")
Jul 08 16:46:23 minikube kubelet[2730]: W0708 16:46:23.946195 2730 status_manager.go:461] Failed to get status for pod "kube-scheduler-minikube_kube-system(099f1c2b79126109140a1f77e211df00)": Get https://192.168.99.100:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-minikube: dial tcp 192.168.99.100:8443: getsockopt: connection refused
Jul 08 16:46:23 minikube kubelet[2730]: I0708 16:46:23.946773 2730 kubelet_node_status.go:82] Attempting to register node minikube
Jul 08 16:46:23 minikube kubelet[2730]: E0708 16:46:23.947106 2730 kubelet_node_status.go:106] Unable to register node "minikube" with API server: Post https://192.168.99.100:8443/api/v1/nodes: dial tcp 192.168.99.100:8443: getsockopt: connection refused
It's working as expected now. Earlier I was facing a connectivity problem from the host to the VM; at that time I was using a hotspot internet connection, which was likely why I couldn't reach the VM.
Now, on my home Wi-Fi, minikube delete and minikube start work perfectly.
Thanks
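For reference, the reset described above boils down to two commands. They are sketched here as an echoed list rather than executed, since actually running them requires a local minikube/VirtualBox install:

```shell
# The reset sequence described above; echoed rather than executed,
# since running it requires a local minikube and VirtualBox install.
RESET_CMDS="minikube delete
minikube start"
printf '%s\n' "$RESET_CMDS"
```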

Kubernetes master on GCE not display node on AWS EC2 [closed]

I created a master node on GCE using these commands:
gcloud compute instances create master --machine-type g1-small --zone europe-west1-d
gcloud compute addresses create myexternalip --region europe-west1
gcloud compute target-pools create kubernetes --region europe-west1
gcloud compute target-pools add-instances kubernetes --instances master --instances-zone europe-west1-d
gcloud compute forwarding-rules create kubernetes-forward --address myexternalip --region europe-west1 --ports 1-65535 --target-pool kubernetes
gcloud compute forwarding-rules describe kubernetes-forward
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
and opened all firewalls.
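The question says "opened all firewalls" without showing the actual rules. On GCE, a minimal rule allowing the API server port that kubeadm join connects to (6443) might look like the following. The rule name and source range are hypothetical, and the command is echoed rather than executed since it needs gcloud credentials:

```shell
# Hypothetical GCE firewall rule allowing the Kubernetes API port (6443),
# which `kubeadm join` connects to; echoed rather than executed.
RULE_CMD="gcloud compute firewall-rules create allow-k8s-apiserver --allow tcp:6443 --source-ranges 0.0.0.0/0"
echo "$RULE_CMD"
```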
After that I created an AWS EC2 instance, opened its firewalls, and ran:
kubeadm join --token 55d287.b540e254a280f853 ip:6443 --discovery-token-unsafe-skip-ca-verification
to connect the instance to the cluster.
But the node is not displayed on the master.
Docker version: 17.12
Kubernetes version: 1.9.3
UPD:
Output from the node on AWS EC2:
systemctl status kubelet.service:
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sat 2018-02-24 20:23:53 UTC; 23s ago
Docs: http://kubernetes.io/docs/
Main PID: 30678 (kubelet)
Tasks: 5
Memory: 13.4M
CPU: 125ms
CGroup: /system.slice/kubelet.service
└─30678 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests -
Feb 24 20:23:53 ip-172-31-0-250 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Feb 24 20:23:53 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:23:53 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.420375 30678 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.420764 30678 controller.go:114] kubelet config controller: starting controller
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.420944 30678 controller.go:118] kubelet config controller: validating combination of defaults and flags
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: W0224 20:23:53.425410 30678 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.444969 30678 server.go:182] Version: v1.9.3
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.445274 30678 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.445565 30678 plugins.go:101] No cloud provider specified.
journalctl -u kubelet:
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:12 ip-172-31-0-250 kubelet[30243]: I0224 20:15:12.819249 30243 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:15:12 ip-172-31-0-250 kubelet[30243]: I0224 20:15:12.821054 30243 controller.go:114] kubelet config controller: starting controller
Feb 24 20:15:12 ip-172-31-0-250 kubelet[30243]: I0224 20:15:12.821243 30243 controller.go:118] kubelet config controller: validating combination of defaults and flags
Feb 24 20:15:12 ip-172-31-0-250 kubelet[30243]: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: kubelet.service: Unit entered failed state.
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:23 ip-172-31-0-250 kubelet[30304]: I0224 20:15:23.186834 30304 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:15:23 ip-172-31-0-250 kubelet[30304]: I0224 20:15:23.187255 30304 controller.go:114] kubelet config controller: starting controller
Feb 24 20:15:23 ip-172-31-0-250 kubelet[30304]: I0224 20:15:23.187451 30304 controller.go:118] kubelet config controller: validating combination of defaults and flags
Feb 24 20:15:23 ip-172-31-0-250 kubelet[30304]: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: kubelet.service: Unit entered failed state.
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:33 ip-172-31-0-250 kubelet[30311]: I0224 20:15:33.422948 30311 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:15:33 ip-172-31-0-250 kubelet[30311]: I0224 20:15:33.423349 30311 controller.go:114] kubelet config controller: starting controller
Feb 24 20:15:33 ip-172-31-0-250 kubelet[30311]: I0224 20:15:33.423525 30311 controller.go:118] kubelet config controller: validating combination of defaults and flags
Feb 24 20:15:33 ip-172-31-0-250 kubelet[30311]: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: kubelet.service: Unit entered failed state.
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 24 20:15:43 ip-172-31-0-250 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Feb 24 20:15:43 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:15:43 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:43 ip-172-31-0-250 kubelet[30319]: I0224 20:15:43.671742 30319 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:15:43 ip-172-31-0-250 kubelet[30319]: I0224 20:15:43.672195 30319 controller.go:114] kubelet config controller: starting controller
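The journalctl output above shows the kubelet crash-looping because /etc/kubernetes/pki/ca.crt is missing. On a worker node that file is normally written by kubeadm join, so the join step likely did not complete. A quick check on the worker might look like this (sketch; path taken from the error message above):

```shell
# Sketch: check whether the CA file named in the kubelet error exists.
# On a worker, /etc/kubernetes/pki/ca.crt is written by `kubeadm join`.
ca="/etc/kubernetes/pki/ca.crt"
if [ -f "$ca" ]; then
    echo "ca.crt present - kubelet should be able to start"
else
    echo "ca.crt missing - kubeadm join has not completed"
fi
```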
UPD:
The error is on the AWS EC2 instance side, but I can't find what is wrong.
PROBLEM SOLVED
kubeadm has to be initialized with the --apiserver-advertise-address flag.
After you create the load balancer, run this command to show its external IP address:
gcloud compute forwarding-rules describe kubernetes-forward
Then initialize the cluster with this flag:
--apiserver-advertise-address=external_load_balancer_ip
So the kubeadm command looks like this:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=external_load_balancer_ip
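Putting the two steps together, the IP can be captured into a variable and substituted into the init command. The IP below is a placeholder; in practice it comes from the forwarding-rules describe output, and the command is echoed rather than executed since it must run on the master with root privileges:

```shell
# Sketch: build the init command from the load balancer's external IP.
# 203.0.113.10 is a placeholder; the real value comes from
# `gcloud compute forwarding-rules describe kubernetes-forward`.
LB_IP="203.0.113.10"
KUBEADM_CMD="sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=${LB_IP}"
echo "$KUBEADM_CMD"
```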

Kubespray: [Errno 111] Connection refused>", "redirected": false, "status": -1, "url": "http://127.0.0.1:8080/healthz

I am new to Kubernetes and I am trying to deploy a Kubernetes cluster using Kubespray (https://github.com/kubernetes-incubator/kubespray).
When I run ansible-playbook -bi inventory/inventory cluster.yml it fails with:
fatal: [kube-k8s-master-1]: FAILED! => {"attempts": 20, "changed": false, "content": "", "failed": true, "msg": "Status code was not [200]: Request failed: ", "redirected": false, "status": -1, "url": "http://127.0.0.1:8080/healthz"}
etcd service log:
Oct 05 13:53:06 kube-k8s-master-1 systemd[1]: etcd.service: main process exited, code=exited, status=1/FAILURE
Oct 05 13:53:11 kube-k8s-master-1 docker[32146]: etcd1
Oct 05 13:53:11 kube-k8s-master-1 systemd[1]: Unit etcd.service entered failed state.
Oct 05 13:53:11 kube-k8s-master-1 systemd[1]: etcd.service failed.
kubelet service log:
Oct 05 13:56:02 kube-k8s-master-1 kubelet[19938]: E1005 13:56:02.500377 19938 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3
Oct 05 13:56:03 kube-k8s-master-1 kubelet[19938]: W1005 13:56:03.190059 19938 container.go:352] Failed to create summary reader for "/docker/ce8dbd1a8edfbe6b604aab4f38eff406846b1cfc8858ba23e7db5cac36d2247d": none of the resources are be
Oct 05 13:56:03 kube-k8s-master-1 kubelet[19938]: E1005 13:56:03.497076 19938 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://127.0.0.1:6443/api/v1/pods?fieldSelector=spec.node
Oct 05 13:56:03 kube-k8s-master-1 kubelet[19938]: E1005 13:56:03.499795 19938 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:400: Failed to list *v1.Service: Get https://127.0.0.1:6443/api/v1/services?resourceVersion=0: dial
Oct 05 13:56:03 kube-k8s-master-1 kubelet[19938]: E1005 13:56:03.500800 19938 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:408: Failed to list *v1.Node: Get https://127.0.0.1:6443/api/v1/nodes?fieldSelector=metadata.name%3
Oct 05 13:56:04 kube-k8s-master-1 kubelet[19938]: W1005 13:56:04.155665 19938 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
Oct 05 13:56:04 kube-k8s-master-1 kubelet[19938]: E1005 13:56:04.155825 19938 kubelet.go:2136] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config
Oct 05 13:56:04 kube-k8s-master-1 kubelet[19938]: E1005 13:56:04.190285 19938 eviction_manager.go:238] eviction manager: unexpected err: failed GetNode: node 'kube-k8s-master-1' not found
How can I fix this?