I am using HAProxy 1.6.4 to load balance my traffic across different servers. Right now my HAProxy runs inside a Docker container. My configuration has several frontends and backends, and in my testing they work and forward traffic as intended.
The problem is that some of my traffic gets a CONN_RESET error, and looking at the HAProxy logs I found that my frontends and backends are continuously restarting. The log looks like this:
local0.warn: Apr 15 12:30:36 haproxy[32715]: Stopping backend default-backend in 0 ms.
local0.warn: Apr 15 12:30:36 haproxy[32715]: Stopping backend default-backend in 0 ms.
local0.warn: Apr 15 12:30:36 haproxy[32715]: Stopping frontend http-frontend in 0 ms.
local0.warn: Apr 15 12:30:36 haproxy[32715]: Stopping frontend http-frontend in 0 ms.
local0.warn: Apr 15 12:30:36 haproxy[32715]: Stopping backend http-service-vbv7kk in 0 ms.
local0.warn: Apr 15 12:30:36 haproxy[32715]: Stopping backend http-service-vbv7kk in 0 ms.
local0.warn: Apr 15 12:30:36 haproxy[32715]: Stopping backend http-service-tsah7b in 0 ms.
local0.notice: Apr 15 12:30:37 haproxy[32720]: Proxy default-backend started.
local0.notice: Apr 15 12:30:37 haproxy[32720]: Proxy default-backend started.
local0.notice: Apr 15 12:30:37 haproxy[32720]: Proxy http-frontend started.
local0.notice: Apr 15 12:30:37 haproxy[32720]: Proxy http-frontend started.
local0.notice: Apr 15 12:30:37 haproxy[32720]: Proxy http-service-vbv7kk started.
local0.notice: Apr 15 12:30:37 haproxy[32720]: Proxy http-service-vbv7kk started.
local0.notice: Apr 15 12:30:37 haproxy[32720]: Proxy http-service-orry6w started.
local0.notice: Apr 15 12:30:37 haproxy[32720]: Proxy tcp-frontend-key-3443 started.
local0.notice: Apr 15 12:30:37 haproxy[32720]: Proxy tcp-service-bzczrg started.
local0.notice: Apr 15 12:30:37 haproxy[32720]: Proxy tcp-service-hacbno started.
local0.notice: Apr 15 12:30:37 haproxy[32720]: Proxy tcp-service-hacbno started.
local0.warn: Apr 15 12:30:37 haproxy[32719]: Stopping backend default-backend in 0 ms.
local0.warn: Apr 15 12:30:37 haproxy[32719]: Stopping backend default-backend in 0 ms.
local0.warn: Apr 15 12:30:37 haproxy[32719]: Stopping frontend http-frontend in 0 ms.
local0.warn: Apr 15 12:30:37 haproxy[32719]: Stopping frontend http-frontend in 0 ms.
local0.warn: Apr 15 12:30:37 haproxy[32719]: Stopping backend http-service-tsah7b in 0 ms.
local0.warn: Apr 15 12:30:37 haproxy[32719]: Stopping backend http-service-tsah7b in 0 ms
The simplest configuration I had looked like this:
Now my question is: what am I missing? Why do my connections get reset, and why do the frontends and backends keep restarting?
*This question was also asked on ServerFault here, but it doesn't look like anybody is checking it.
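For what it's worth, the changing PIDs in the log (32715, then 32719 and 32720) look like graceful reloads (haproxy -sf) rather than crashes, and on 1.6 a reload is not seamless, so in-flight connections can be reset, which would match the CONN_RESET errors. A few checks worth running (a sketch; the container name haproxy and the config path are the official image's defaults and may differ in your setup):
docker top haproxy                                              # an -sf <oldpid> argument on a haproxy process means a soft reload is in flight
docker exec haproxy ls -l /usr/local/etc/haproxy/haproxy.cfg    # the mtime shows how often something rewrites the config
docker logs haproxy 2>&1 | grep -c "Stopping frontend"          # count reloads over the container's lifetime
If something (an orchestrator, a config templater) keeps rewriting the config and reloading, that would explain both the restart log lines and the resets.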
When I run the command
journalctl -f
I get the logs below:
Jan 20 16:28:49 node1 kubelet[1237]: I0120 16:28:49.858522 1237 scope.go:110] "RemoveContainer" containerID="6ff68682a6151eaecce82b16ca6bbc23ce44e71aedd871e5816dec1989a6ac7d"
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.859688275+09:00" level=info msg="RemoveContainer for \"6ff68682a6151eaecce82b16ca6bbc23ce44e71aedd871e5816dec1989a6ac7d\""
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.866650422+09:00" level=info msg="RemoveContainer for \"6ff68682a6151eaecce82b16ca6bbc23ce44e71aedd871e5816dec1989a6ac7d\" returns successfully"
Jan 20 16:28:49 node1 kubelet[1237]: I0120 16:28:49.866961 1237 scope.go:110] "RemoveContainer" containerID="f205217c8ed1ca6303d9035e95584af96708d07600887d2b4254d1080389dfbd"
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.868036395+09:00" level=info msg="RemoveContainer for \"f205217c8ed1ca6303d9035e95584af96708d07600887d2b4254d1080389dfbd\""
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.872289374+09:00" level=info msg="RemoveContainer for \"f205217c8ed1ca6303d9035e95584af96708d07600887d2b4254d1080389dfbd\" returns successfully"
Jan 20 16:28:49 node1 kubelet[1237]: I0120 16:28:49.872457 1237 scope.go:110] "RemoveContainer" containerID="b517943a97621ec70c3bbf95d4e6caa9c109ceba19eb013abdfeb252682db634"
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.873342572+09:00" level=info msg="RemoveContainer for \"b517943a97621ec70c3bbf95d4e6caa9c109ceba19eb013abdfeb252682db634\""
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.877366463+09:00" level=info msg="RemoveContainer for \"b517943a97621ec70c3bbf95d4e6caa9c109ceba19eb013abdfeb252682db634\" returns successfully"
Jan 20 16:28:49 node1 containerd[1012]: time="2023-01-20T16:28:49.879261150+09:00" level=info msg="StopPodSandbox for \"b08cbdab3744ff66a176a26643e59ec6b925082af7802dc9ea8dea29b6695331\""
However, when I run it with the -u option
journalctl -u kubelet -f
I can't get recent logs. I only get logs from one day earlier:
Jan 19 03:01:32 node kubelet[1237]: I0119 03:01:32.192834 1237 event.go:294] "Event occurred" object="kube-system/nginx-proxy-node" fieldPath="" kind="Pod" apiVersion="v1" type="Normal" reason="SandboxChanged" message="Pod sandbox changed, it will be killed and re-created."
Jan 19 03:01:32 node kubelet[1237]: I0119 03:01:32.212913 1237 reflector.go:255] Listing and watching *v1.Service from vendor/k8s.io/client-go/informers/factory.go:134
Jan 19 03:01:32 node kubelet[1237]: E0119 03:01:32.213013 1237 kubelet.go:2424] "Error getting node" err="node \"node\" not found"
Jan 19 03:01:32 node kubelet[1237]: W0119 03:01:32.213540 1237 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://localhost:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
Jan 19 03:01:32 node kubelet[1237]: E0119 03:01:32.213598 1237 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://localhost:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:6443: connect: connection refused
Why can't I get the full kubelet logs (the ones journalctl -f shows)?
What is the difference between the two commands?
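To narrow down how journald is tagging the kubelet entries, it can help to filter on fields other than the unit; a few cross-checks (a sketch, assuming kubelet is supposed to run as the systemd unit kubelet.service):
journalctl -f -t kubelet          # filter by syslog identifier instead of unit
journalctl -f _COMM=kubelet       # filter by the name of the process writing the entries
journalctl -f -u kubelet.service  # the unit filter, spelled out in full
journalctl --verify               # check the journal files themselves for corruption
The difference between the two commands is that journalctl -f follows everything in the journal, while -u kubelet only shows entries whose _SYSTEMD_UNIT field is kubelet.service. If the -t or _COMM filters show current lines while -u does not, the new entries are simply not being attributed to the unit.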
I am setting up PostgreSQL load balancing using HAProxy, and I ran into the error messages below:
Jun 30 07:57:43 vm0 systemd[1]: Starting HAProxy Load Balancer...
Jun 30 07:57:43 vm0 haproxy[15084]: [ALERT] 180/075743 (15084) : Starting proxy ReadWrite: cannot bind socket [0.0.0.0:8081]
Jun 30 07:57:43 vm0 haproxy[15084]: [ALERT] 180/075743 (15084) : Starting proxy ReadOnly: cannot bind socket [0.0.0.0:8082]
Jun 30 07:57:43 vm0 systemd[1]: haproxy.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 07:57:43 vm0 systemd[1]: haproxy.service: Failed with result 'exit-code'.
Jun 30 07:57:43 vm0 systemd[1]: Failed to start HAProxy Load Balancer.
Below is my haproxy.cfg file. I kept checking all the possibilities but couldn't find the reason for the error. I actually checked whether the ports were already in use, but no other process is using ports 8001 and 8002.
-- haproxy.cfg
listen ReadWrite
bind *:8081
option httpchk
http-check expect status 200
default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
server pg1 pg1:5432 maxconn 100 check port 23267
listen ReadOnly
bind *:8082
option httpchk
http-check expect status 206
default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
server pg2 pg1:5432 maxconn 100 check port 23267
server pg3 pg2:5432 maxconn 100 check port 23267
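A few things worth checking for the bind failure (a sketch; the SELinux parts assume a RHEL/CentOS-style host, so verify they apply to your distribution):
sudo ss -ltnp '( sport = :8081 or sport = :8082 )'   # shows who, if anyone, currently holds the ports
sudo ausearch -m avc -ts recent | grep haproxy       # recent SELinux denials; refused binds show up here
sudo setsebool -P haproxy_connect_any 1              # lets haproxy bind non-standard ports under SELinux
A stale HAProxy instance still holding 8081/8082 and an SELinux denial are both common causes of "cannot bind socket" when nothing obvious seems to be listening.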
kubeadm init seems to hang since I started using the vSphere cloud provider. I followed the instructions from here. Has anybody got it working with 1.9?
root@master-0:~# kubeadm init --config /tmp/kube.yaml
[init] Using Kubernetes version: v1.9.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "master-0" could not be reached
[WARNING Hostname]: hostname "master-0" lookup master-0 on 8.8.8.8:53: no such host
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master-0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.11.0.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
Master OS details
root@master-0:~# uname -r
4.4.0-21-generic
root@master-0:~# docker version
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Experimental: false
root@master-0:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
ID=ubuntu
The kubelet service seems to be running fine:
root@master-0:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 90-local-extras.conf
Active: active (running) since Mon 2018-01-22 11:25:00 UTC; 13min ago
Docs: http://kubernetes.io/docs/
Main PID: 4270 (kubelet)
Tasks: 13 (limit: 512)
Memory: 37.6M
CPU: 11.626s
CGroup: /system.slice/kubelet.service
└─4270 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
journalctl -f -u kubelet shows some connection-refused errors, probably because no networking service is installed yet; those errors should go away once a network add-on is installed after kubeadm init (see the sketch after this log).
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.759764 1184 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.761350 1184 controller.go:114] kubelet config controller: starting controller
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.762632 1184 controller.go:118] kubelet config controller: validating combination of defaults and flags
Jan 22 11:17:46 localhost systemd[1]: Started Kubernetes systemd probe.
Jan 22 11:17:46 localhost kubelet[1184]: W0122 11:17:46.070619 1184 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.081384 1184 server.go:182] Version: v1.9.1
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.081417 1184 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.082271 1184 plugins.go:101] No cloud provider specified.
Jan 22 11:17:46 localhost kubelet[1184]: error: failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Unit entered failed state.
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 11:17:56 localhost systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jan 22 11:17:56 localhost systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 22 11:17:56 localhost systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.410883 1229 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.411198 1229 controller.go:114] kubelet config controller: starting controller
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.411353 1229 controller.go:118] kubelet config controller: validating combination of defaults and flags
Jan 22 11:17:56 localhost systemd[1]: Started Kubernetes systemd probe.
Jan 22 11:17:56 localhost kubelet[1229]: W0122 11:17:56.424264 1229 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429102 1229 server.go:182] Version: v1.9.1
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429156 1229 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429247 1229 plugins.go:101] No cloud provider specified.
Jan 22 11:17:56 localhost kubelet[1229]: E0122 11:17:56.461608 1229 certificate_manager.go:314] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://10.11.0.101:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 10.11.0.101:6443: getsockopt: connection refused
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.491374 1229 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.492069 1229 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.492102 1229 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
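As a concrete example of the add-on step once init completes, flannel is one option (a sketch; the manifest URL is illustrative, so pick the add-on and version matching your cluster):
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml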
docker ps, controller & scheduler logs
root@master-0:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6db549891439 677911f7ae8f "kube-scheduler --..." About an hour ago Up About an hour k8s_kube-scheduler_kube-scheduler-master-0_kube-system_df32e281019039e73be77e3f53d09596_0
4f7ddefbd86e 4978f9a64966 "kube-controller-m..." About an hour ago Up About an hour k8s_kube-controller-manager_kube-controller-manager-master-0_kube-system_34bad395be69e74db6304d6c4218c536_0
18604db89db6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-scheduler-master-0_kube-system_df32e281019039e73be77e3f53d09596_0
252b86ea4b5e gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-controller-manager-master-0_kube-system_34bad395be69e74db6304d6c4218c536_0
4021061bf8a6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-apiserver-master-0_kube-system_7a3ae9279d0ca7b4ada8333fbe7442ed_0
4f94163d313b gcr.io/google_containers/etcd-amd64:3.1.10 "etcd --name=etcd0..." About an hour ago Up About an hour 0.0.0.0:2379-2380->2379-2380/tcp, 0.0.0.0:4001->4001/tcp, 7001/tcp etcd
root@master-0:~# docker logs -f 4f7ddefbd86e
I0122 11:25:06.253706 1 controllermanager.go:108] Version: v1.9.1
I0122 11:25:06.258712 1 leaderelection.go:174] attempting to acquire leader lease...
E0122 11:25:06.259448 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:09.711377 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:13.969270 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:17.564964 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:20.616174 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
root@master-0:~# docker logs -f 6db549891439
W0122 11:25:06.285765 1 server.go:159] WARNING: all flags other than --config are deprecated. Please begin using a config file ASAP.
I0122 11:25:06.292865 1 server.go:551] Version: v1.9.1
I0122 11:25:06.295776 1 server.go:570] starting healthz server on 127.0.0.1:10251
E0122 11:25:06.295947 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: Get https://10.11.0.101:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296027 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: Get https://10.11.0.101:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296092 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: Get https://10.11.0.101:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296160 1 reflector.go:205] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:590: Failed to list *v1.Pod: Get https://10.11.0.101:6443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296218 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.StatefulSet: Get https://10.11.0.101:6443/apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.297374 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolume: Get https://10.11.0.101:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
There was a bug in the controller manager when starting with the vSphere cloud provider. See https://github.com/kubernetes/kubernetes/issues/57279; it was fixed in 1.9.2.
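To pick up the fix, one option is to pin the control plane to the patched release in the kubeadm config and re-run init. A minimal sketch, assuming the v1alpha1 MasterConfiguration format that kubeadm 1.9 uses (your real kube.yaml presumably carries more settings; keep those and just set kubernetesVersion):
# Minimal config pinning the patched release; carry over any other
# fields your existing kube.yaml already sets.
cat > /tmp/kube.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.9.2
cloudProvider: vsphere
EOF
kubeadm reset
kubeadm init --config /tmp/kube.yaml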
We have a 2-node cluster running on GKE, and for a long time we have been suffering random downtime from what appear to be random restarts of the kubelet service.
These are the logs from the last downtime on the node that was running the Kubernetes system pods at the time:
sudo journalctl -r -u kubelet
Aug 22 04:17:36 gke-app-node1 systemd[1]: Started Kubernetes kubelet.
Aug 22 04:17:36 gke-app-node1 systemd[1]: Stopped Kubernetes kubelet.
Aug 22 04:17:36 gke-app-node1 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Aug 22 04:10:46 gke-app-node1 systemd[1]: Started Kubernetes kubelet.
Aug 22 04:10:46 gke-app-node1 systemd[1]: Stopped Kubernetes kubelet.
Aug 22 04:10:46 gke-app-node1 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Aug 22 04:09:48 gke-app-node1 systemd[1]: Started Kubernetes kubelet.
Aug 22 04:09:46 gke-app-node1 systemd[1]: Stopped Kubernetes kubelet.
Aug 22 04:09:44 gke-app-node1 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Aug 22 04:02:05 gke-app-node1 systemd[1]: Started Kubernetes kubelet.
Aug 22 04:02:03 gke-app-node1 systemd[1]: Stopped Kubernetes kubelet.
Aug 22 04:02:03 gke-app-node1 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Aug 22 04:01:09 gke-app-node1 systemd[1]: Started Kubernetes kubelet.
Aug 22 04:01:09 gke-app-node1 systemd[1]: Stopped Kubernetes kubelet.
Aug 22 04:01:08 gke-app-node1 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Aug 22 04:00:58 gke-app-node1 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 22 04:00:58 gke-app-node1 systemd[1]: kubelet.service: Unit entered failed state.
Aug 22 04:00:58 gke-app-node1 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 22 04:00:58 gke-app-node1 kubelet[1330]: I0822 04:00:58.104306 1330 server.go:794] GET /healthz: (5.286840082s) 200 [[curl/7.51.0] 127.0.0.1:35924]
Aug 22 04:00:58 gke-app-node1 kubelet[1330]: I0822 04:00:57.981923 1330 docker_server.go:73] Stop docker server
Aug 22 04:00:58 gke-app-node1 kubelet[1330]: I0822 04:00:53.991354 1330 server.go:794] GET /healthz: (5.834296581s) 200 [[Go-http-client/1.1] 127.0.0.1:35926]
Aug 22 04:00:57 gke-app-node1 kubelet[1330]: I0822 04:00:42.636036 1330 fsHandler.go:131] du and find on following dirs took 16.466105259s: [/var/lib/docker/overlay/e496194dfcb8a053050a0eb73965f57b109fe3036c1ffc5b0f12b4a341f13794 /var/lib/docker/containers/b6a212aedf588a1f1d173fd9f4871f678d014e260e8aa6147ad8212619675802]
Aug 22 04:00:39 gke-app-node1 kubelet[1330]: I0822 04:00:38.061492 1330 fsHandler.go:131] du and find on following dirs took 12.246559762s: [/var/lib/docker/overlay/303dc4c5814a0a12a6ac450e5b27327f55a7baa8000c011bd38521f3ff997e0f /var/lib/docker/containers/18a95beaf86b382bb8abc6ee40033020de1da4b54a5ca52e1c61bf7f14d6ef44]
Aug 22 04:00:39 gke-app-node1 kubelet[1330]: I0822 04:00:38.476930 1330 fsHandler.go:131] du and find on following dirs took 11.766408434s: [/var/lib/docker/overlay/86802dda255243388ab86fa8fc403187193f8c4ccdee54d8ca18c021ca35bc36 /var/lib/docker/containers/7fd4d507ec6445035fcb4a60efd4ae68e54052c1cace3268be72954062fed830]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:35.865293 1330 prober.go:106] Readiness probe for "web-deployment-812924635-ntcqw_default(bcf76fb6-8661-11e7-88da-42010a840211):rails-app" failed (failure): Get http://10.48.1.7:80/health_check: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: W0822 04:00:35.380953 1330 prober.go:98] No ref for container "docker://e2dcca90c091d2789af9b22e1405cb273f63c399aecde2686ef4b1e8ab9fdc5f" (web-deployment-812924635-ntcqw_default(bcf76fb6-8661-11e7-88da-42010a840211):rails-app)
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:32.514573 1330 fsHandler.go:131] du and find on following dirs took 7.127181023s: [/var/lib/docker/overlay/647f419bce585a3d0f5792376b269704cb358828bc5c4fb5e815bfa23950d511 /var/lib/docker/containers/59f7ada601f38a243daa7154f2ed27790d14d163c4675e26186d9a6d9db0272e]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:32.255644 1330 fsHandler.go:131] du and find on following dirs took 6.72357089s: [/var/lib/docker/overlay/992a65b68531c5ac53e4cd06f7a8f8abe4b908d943b5b9cc38da126b469050b2 /var/lib/docker/containers/2be7aede380d6f3452a5abacc53f9e0a69f8c5ee3dbdf5351a30effdf2d47833]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:32.067601 1330 fsHandler.go:131] du and find on following dirs took 6.511405985s: [/var/lib/docker/overlay/7bc4e00d232b4a22eb64a87ad079970aabb24bde17d3adaa6989840ebc91b96c /var/lib/docker/containers/949c778861a4f86440c5dd21d4daf40e97fb49b9eb1498111d7941ca3e63541a]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:31.907928 1330 fsHandler.go:131] du and find on following dirs took 6.263993478s: [/var/lib/docker/overlay/303abc540335a9ce7077fd21182845fbff2f06ed9eb1ac8af9effdfd048153b5 /var/lib/docker/containers/6544add2796f365d67d72fe283e083042aa2af82862eb6335295d228efa28d61]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:31.907845 1330 fsHandler.go:131] du and find on following dirs took 7.630063774s: [/var/lib/docker/overlay/a36c376a7ddd04c168822770866d8c34499ddec7e4039ada579b3d65adc57347 /var/lib/docker/containers/6a606a6c901f8373dff22f94ba77a24956a7b4eed3d0e550be168eeaeed86236]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:31.902731 1330 fsHandler.go:131] du and find on following dirs took 6.259025553s: [/var/lib/docker/overlay/0a7170e1a42bfa8b0112d8c7bb805da8e4778aa5ce90978d90ed5335929633ff /var/lib/docker/containers/1f68eaa59cab0a0bcdc087e25d18573044b599967a56867d189acd82bc19172b]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:31.871796 1330 fsHandler.go:131] du and find on following dirs took 6.410999589s: [/var/lib/docker/overlay/25ffbf8bd71e814af8991cc52499286d2d345b3f348fec9358ca366f341ed588 /var/lib/docker/containers/efe1969587c9b0412fe7f7c8c24bbe1326d46f576bddf12f88ae7cd406b6475d]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:31.871699 1330 fsHandler.go:131] du and find on following dirs took 6.259940483s: [/var/lib/docker/overlay/56909c00ec20b59c1fcb4988cd51fe50ebb467681f37bab3f9061d76993565bc /var/lib/docker/containers/a8d1df672c23313313b511389f6eeb44e78c3f9e4c346d214fc190695f270e5f]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:31.614518 1330 fsHandler.go:131] du and find on following dirs took 5.982313751s: [/var/lib/docker/overlay/cb057acc4f3a3e91470847f78ffd550b25a24605cec42ee080aaf193933968cf /var/lib/docker/containers/e755c4d88e4e5d4d074806e829b1e83fd52c8e2b1c01c27131222a40b0c6c10a]
Aug 22 04:00:35 gke-app-node1 kubelet[1330]: I0822 04:00:31.837000 1330 fsHandler.go:131] du and find on following dirs took 7.500602734s: [/var/lib/docker/overlay/e9539d9569ccdcc79db1cd4add7036d70ad71391dc30ca16903bdd9bda4d0972 /var/lib/docker/containers/b0a7c955af1ed85f56aeaed1d787794d5ffd04c2a81820465a1e3453242c8a19]
Aug 22 04:00:34 gke-app-node1 kubelet[1330]: I0822 04:00:31.836947 1330 fsHandler.go:131] du and find on following dirs took 6.257091389s: [/var/lib/docker/overlay/200f0f063157381d25001350c34914e020ea16b3f82f7bedf7e4b01d34e513a7 /var/lib/docker/containers/eca7504b7e24332381e459a2f09acc150a5681c148cebc5867ac66021cbe0435]
Aug 22 04:00:33 gke-app-node1 kubelet[1330]: I0822 04:00:31.836787 1330 fsHandler.go:131] du and find on following dirs took 7.286756684s: [/var/lib/docker/overlay/37334712f505b11c7f0b27fb0580eadc0e79fc789dcfafbea1730efd500fb69c /var/lib/docker/containers/4858388c53032331868497859110a7267fef95110a7ab3664aa857a21ee02a3e]
Aug 22 04:00:22 gke-app-node1 kubelet[1330]: I0822 04:00:21.999930 1330 qos_container_manager_linux.go:286] [ContainerManager]: Updated QoS cgroup configuration
Aug 22 04:00:20 gke-app-node1 kubelet[1330]: I0822 04:00:19.598974 1330 server.go:794] GET /healthz: (136.991429ms) 200 [[curl/7.51.0] 127.0.0.1:35888]
Aug 22 04:00:10 gke-app-node1 kubelet[1330]: I0822 04:00:08.024328 1330 server.go:794] GET /healthz: (36.191534ms) 200 [[curl/7.51.0] 127.0.0.1:35868]
Aug 22 04:00:05 gke-app-node1 kubelet[1330]: I0822 04:00:05.861339 1330 server.go:794] GET /stats/summary/: (808.201834ms) 200 [[Go-http-client/1.1] 10.48.0.7:43022]
Aug 22 04:00:03 gke-app-node1 kubelet[1330]: W0822 04:00:02.723586 1330 raw.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/kube-logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/kube-logrotate.service: no such file or directory
Aug 22 04:00:03 gke-app-node1 kubelet[1330]: W0822 04:00:02.723529 1330 raw.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/kube-logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/kube-logrotate.service: no such file or directory
Aug 22 04:00:03 gke-app-node1 kubelet[1330]: W0822 04:00:02.622765 1330 raw.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/kube-logrotate.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/kube-logrotate.service: no such file or directory
sudo dmesg -T
[Tue Aug 22 04:17:31 2017] cbr0: port 4(vethd18b0189) entered disabled state
[Tue Aug 22 04:17:31 2017] device vethd18b0189 left promiscuous mode
[Tue Aug 22 04:17:31 2017] cbr0: port 4(vethd18b0189) entered disabled state
[Tue Aug 22 04:17:40 2017] cbr0: port 6(veth2985149d) entered disabled state
[Tue Aug 22 04:17:40 2017] device veth2985149d left promiscuous mode
[Tue Aug 22 04:17:40 2017] cbr0: port 6(veth2985149d) entered disabled state
[Tue Aug 22 04:17:42 2017] cbr0: port 5(veth2a1d2827) entered disabled state
[Tue Aug 22 04:17:42 2017] device veth2a1d2827 left promiscuous mode
[Tue Aug 22 04:17:42 2017] cbr0: port 5(veth2a1d2827) entered disabled state
[Tue Aug 22 04:17:42 2017] cbr0: port 2(vetha070fbca) entered disabled state
[Tue Aug 22 04:17:42 2017] device vetha070fbca left promiscuous mode
[Tue Aug 22 04:17:42 2017] cbr0: port 2(vetha070fbca) entered disabled state
[Tue Aug 22 04:17:42 2017] cbr0: port 3(veth7e3e663a) entered disabled state
[Tue Aug 22 04:17:42 2017] device veth7e3e663a left promiscuous mode
[Tue Aug 22 04:17:42 2017] cbr0: port 3(veth7e3e663a) entered disabled state
[Tue Aug 22 04:17:57 2017] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[Tue Aug 22 04:17:57 2017] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[Tue Aug 22 04:17:57 2017] device veth215e85fc entered promiscuous mode
[Tue Aug 22 04:17:57 2017] cbr0: port 1(veth215e85fc) entered forwarding state
[Tue Aug 22 04:17:57 2017] cbr0: port 1(veth215e85fc) entered forwarding state
[Tue Aug 22 04:18:12 2017] cbr0: port 1(veth215e85fc) entered forwarding state
And finally here we can see how the Kubernetes pods were killed around that time:
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d7b59cd7116c gcr.io/google_containers/pause-amd64:3.0 "/pause" 4 hours ago Up 4 hours k8s_POD_worker-deployment-3042507874-ghjpd_default_1de2291b-86ef-11e7-88da-42010a840211_0
e6f2563ac3c8 5e65193af899 "/monitor --component" 4 hours ago Up 4 hours k8s_prometheus-to-sd-exporter_fluentd-gcp-v2.0-5vlrc_kube-system_6cebabe5-84f9-11e7-88da-42010a840211_1
45d7886d0308 487e99ee05d9 "/bin/sh -c '/run.sh " 4 hours ago Up 4 hours k8s_fluentd-gcp_fluentd-gcp-v2.0-5vlrc_kube-system_6cebabe5-84f9-11e7-88da-42010a840211_1
b5a5c5d085ac gcr.io/google_containers/pause-amd64:3.0 "/pause" 4 hours ago Up 4 hours k8s_POD_fluentd-gcp-v2.0-5vlrc_kube-system_6cebabe5-84f9-11e7-88da-42010a840211_1
32dcd4d5847c 54d2a8698e3c "/bin/sh -c 'echo -99" 4 hours ago Up 4 hours k8s_kube-proxy_kube-proxy-gke-app-node1_kube-system_ed40100d42c9e285fa1f59ca7a1d8f8d_1
2d055d96b610 gcr.io/google_containers/pause-amd64:3.0 "/pause" 4 hours ago Up 4 hours k8s_POD_kube-proxy-gke-app-node1_kube-system_ed40100d42c9e285fa1f59ca7a1d8f8d_1
2be7aede380d 5e65193af899 "/monitor --component" 2 days ago Exited (0) 4 hours ago k8s_prometheus-to-sd-exporter_fluentd-gcp-v2.0-5vlrc_kube-system_6cebabe5-84f9-11e7-88da-42010a840211_0
7fd4d507ec64 54d2a8698e3c "/bin/sh -c 'echo -99" 2 days ago Exited (0) 4 hours ago k8s_kube-proxy_kube-proxy-gke-app-node1_kube-system_ed40100d42c9e285fa1f59ca7a1d8f8d_0
cc615ec1e87c efe10ee6727f "/bin/touch /run/xtab" 2 days ago Exited (0) 2 days ago k8s_touch-lock_kube-proxy-gke-app-node1_kube-system_ed40100d42c9e285fa1f59ca7a1d8f8d_0
b0a7c955af1e 487e99ee05d9 "/bin/sh -c '/run.sh " 2 days ago Exited (0) 4 hours ago k8s_fluentd-gcp_fluentd-gcp-v2.0-5vlrc_kube-system_6cebabe5-84f9-11e7-88da-42010a840211_0
4858388c5303 gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 days ago Exited (0) 4 hours ago k8s_POD_fluentd-gcp-v2.0-5vlrc_kube-system_6cebabe5-84f9-11e7-88da-42010a840211_0
6a606a6c901f gcr.io/google_containers/pause-amd64:3.0 "/pause" 2 days ago Exited (0) 4 hours ago k8s_POD_kube-proxy-gke-app-node1_kube-system_ed40100d42c9e285fa1f59ca7a1d8f8d_0
Our cluster is running Kubernetes 1.7.3 on both the master and the node pools, and it's on GKE (zone europe-west1-d).
Any help would be appreciated as we don't really know how to debug this problem further.
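The "du and find ... took 16s" lines and the 5+ second GET /healthz responses right before each restart suggest the health check is timing out under disk I/O pressure rather than the kubelet crashing on its own. A way to watch for that while the problem recurs (a sketch; 10248 is the kubelet's default healthz port and may differ on your node image, and iostat comes from sysstat, which on GKE nodes may need to run via toolbox):
# Sample healthz latency and disk utilization side by side; sustained
# multi-second healthz times together with high %util point at I/O
# starvation on the disk backing /var/lib/docker.
while true; do
  curl -o /dev/null -s -w 'healthz %{time_total}s\n' http://127.0.0.1:10248/healthz
  sleep 5
done &
iostat -x 5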
The service is not starting, and the listener is not active on port 8080.
Here is my Kubernetes configuration:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://centos-master:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
systemctl status kube-apiserver -l
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Mon 2017-08-14 12:07:04 +0430; 29s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 2087 ExecStart=/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS (code=exited, status=2)
Main PID: 2087 (code=exited, status=2)
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Aug 14 12:07:04 centos-master systemd[1]: Failed to start Kubernetes API Server.
Aug 14 12:07:04 centos-master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service failed.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service holdoff time over, scheduling restart.
Aug 14 12:07:04 centos-master systemd[1]: start request repeated too quickly for kube-apiserver.service
Aug 14 12:07:04 centos-master systemd[1]: Failed to start Kubernetes API Server.
Aug 14 12:07:04 centos-master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service failed.
tail -n 1000 /var/log/messages
resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.240160 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466: Failed to list *api.PersistentVolume: Get http://centos-master:8080/api/v1/persistentvolumes?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.242039 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:470: Failed to list *api.Service: Get http://centos-master:8080/api/v1/services?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.242924 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:457: Failed to list *api.Pod: Get http://centos-master:8080/api/v1/pods?fieldSelector=spec.nodeName%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.269386 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:473: Failed to list *api.ReplicationController: Get http://centos-master:8080/api/v1/replicationcontrollers?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.285782 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481: Failed to list *extensions.ReplicaSet: Get http://centos-master:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.286529 606 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.PersistentVolumeClaim: Get http://centos-master:8080/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
systemd[1]: kube-apiserver.service: main process exited, code=exited, status=2/INVALIDARGUMENT
The arguments you're using do not seem valid.
Check the list of valid arguments here.
You can also follow the Kubernetes The Hard Way guide for a trusted way to run the API server.
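One way to surface exactly which argument the binary rejects is to expand the same environment files the unit references and run the ExecStart line by hand; a sketch, assuming the usual CentOS layout of /etc/kubernetes/config and /etc/kubernetes/apiserver (adjust the paths to wherever your KUBE_* variables actually live):
# Export the KUBE_* variables the unit file references, then run the
# server in the foreground; it prints the flag it cannot parse and exits.
set -a
. /etc/kubernetes/config
. /etc/kubernetes/apiserver
set +a
/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS \
  $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV \
  $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS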