HAProxy for PostgreSQL load balancer start error - cannot bind socket - postgresql

I am setting up PostgreSQL load balancing using HAProxy and I get the error messages below:
Jun 30 07:57:43 vm0 systemd[1]: Starting HAProxy Load Balancer...
Jun 30 07:57:43 vm0 haproxy[15084]: [ALERT] 180/075743 (15084) : Starting proxy ReadWrite: cannot bind socket [0.0.0.0:8081]
Jun 30 07:57:43 vm0 haproxy[15084]: [ALERT] 180/075743 (15084) : Starting proxy ReadOnly: cannot bind socket [0.0.0.0:8082]
Jun 30 07:57:43 vm0 systemd[1]: haproxy.service: Main process exited, code=exited, status=1/FAILURE
Jun 30 07:57:43 vm0 systemd[1]: haproxy.service: Failed with result 'exit-code'.
Jun 30 07:57:43 vm0 systemd[1]: Failed to start HAProxy Load Balancer.
Below is my haproxy.cfg file. I kept checking all the possibilities but I couldn't find the reason for the error. I actually checked whether the ports are already in use, but no other process is using ports 8081 and 8082.
-- haproxy.cfg
listen ReadWrite
    bind *:8081
    option httpchk
    http-check expect status 200
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server pg1 pg1:5432 maxconn 100 check port 23267

listen ReadOnly
    bind *:8082
    option httpchk
    http-check expect status 206
    default-server inter 3s fall 3 rise 2 on-marked-down shutdown-sessions
    server pg2 pg1:5432 maxconn 100 check port 23267
    server pg3 pg2:5432 maxconn 100 check port 23267
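Before digging further, it may help to confirm the bind conflict directly. A minimal diagnostic sketch, assuming a systemd host with the iproute2 tools and the usual /etc/haproxy/haproxy.cfg path (adjust for your system):

ss -ltnp | grep -E ':(8081|8082)'        # is anything already listening on these ports?
pgrep -a haproxy                         # a stale or duplicate haproxy instance would also cause "cannot bind socket"
haproxy -c -f /etc/haproxy/haproxy.cfg   # validate the configuration before restarting the service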

Related

mongodb is not accessible from outside the server

Steps followed:
updated bindIp in the /etc/mongod.conf file
added a rule in iptables
root#:/var/log/mongodb# iptables -L -n -v
=============================================================
Chain INPUT (policy DROP 108K packets, 5349K bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- * * 194.195.119.119 0.0.0.0/0 tcp dpt:27017 state NEW,ESTABLISHED
Chain OUTPUT (policy ACCEPT 1199 packets, 75946 bytes)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- * * 0.0.0.0/0 194.195.119.119 tcp spt:27017 state ESTABLISHED
allowed in ufw:
root#:/var/log/mongodb# ufw status
===============================================
To Action From
-- ------ ----
22/tcp ALLOW Anywhere
27017 ALLOW Anywhere
22/tcp (v6) ALLOW Anywhere (v6)
27017 (v6) ALLOW Anywhere (v6)
bindIp works fine with 0.0.0.0, but I am unable to start the mongodb service when I add actual IP addresses,
e.g. bindIp: 194.195.119.119
or bindIp: 194.195.119.119,103.208.71.9
and I get the error below on start:
root#:~# systemctl status mongod
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2022-07-11 09:02:37 UTC; 1s ago
Docs: https://docs.mongodb.org/manual
Process: 161544 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=48)
Main PID: 161544 (code=exited, status=48)
CPU: 365ms
Jul 11 09:02:37 .ip.linodeusercontent.com systemd[1]: Started MongoDB Database Server.
Jul 11 09:02:37 .ip.linodeusercontent.com systemd[1]: mongod.service: Main process exited, code=exited, status=48/n/a
Jul 11 09:02:37 .ip.linodeusercontent.com systemd[1]: mongod.service: Failed with result 'exit-code'.
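A point worth checking (a hedged note, not from the original post): exit status 48 from mongod generally means it failed to set up its network listener, and mongod can only bind addresses that are actually configured on the server's own interfaces. Comparing the bindIp list with the locally configured addresses is a quick sanity check; the net section below is only an illustration, not the poster's actual config:

ip -4 addr show | grep inet          # addresses actually present on this host
# illustrative /etc/mongod.conf net section - every bindIp entry must be a local address (or 0.0.0.0)
# net:
#   port: 27017
#   bindIp: 127.0.0.1,194.195.119.119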

kubectl logs command gives error if worker node has IPv6 address

I have a 3-node cluster (1 master + 2 worker). I am exploring dual-stack capabilities of Kubernetes (version 1.21). I have used kubeadm to initialize this cluster.
To begin with, I see the IPv4 addresses of the nodes in the output of: kubectl get nodes -o wide
Also, I can fetch logs of pods using the command: kubectl logs
Then, I configured all 3 kubelets with --node-ip=IPv6,IPv4 (a sketch of how this can be passed to the kubelet is shown below). On the master, if I run kubectl get nodes -o wide, I can see an IPv6 address showing up for every node, which is fine.
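For reference, a hedged sketch of how the node IPs are commonly passed to kubeadm-managed kubelets; the drop-in file location and the example addresses (taken from the logs further down) are assumptions:

# /etc/default/kubelet (Debian/Ubuntu) or /etc/sysconfig/kubelet (RHEL/CentOS)
KUBELET_EXTRA_ARGS=--node-ip=2001:420:db8:11::444:d1,10.78.109.207
# then reload and restart the kubelet
systemctl daemon-reload && systemctl restart kubelet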
But, on master, if I run the command to fetch logs of a pod, I get the error:
kubectl logs my-nginx-deploy-64fdd68f8d-klg78
Error from server: Get "https://[2001:420:db8:11::444:d1]:10250/containerLogs/default/my-nginx-deploy-64fdd68f8d-klg78/nginx": Gateway Timeout
It would be great if someone could help me resolve this.
Additional Information:
When running the kubectl command with -v=9:
I0122 06:15:39.727678 23430 round_trippers.go:435] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.21.0 (linux/amd64) kubernetes/cb303e6" 'https://10.78.109.207:6443/api/v1/namespaces/default/pods/my-nginx-deploy-64fdd68f8d-klg78/log'
I0122 06:15:39.734842 23430 round_trippers.go:454] GET https://10.10.10.207:6443/api/v1/namespaces/default/pods/my-nginx-deploy-64fdd68f8d-klg78/log 500 Internal Server Error in 7 milliseconds
I0122 06:15:39.734855 23430 round_trippers.go:460] Response Headers:
I0122 06:15:39.734862 23430 round_trippers.go:463] Cache-Control: no-cache, private
I0122 06:15:39.734868 23430 round_trippers.go:463] Content-Type: application/json
I0122 06:15:39.734873 23430 round_trippers.go:463] Content-Length: 219
I0122 06:15:39.734879 23430 round_trippers.go:463] Date: Sat, 22 Jan 2022 11:15:39 GMT
I0122 06:15:39.734893 23430 request.go:1123] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Get \"https://[2001:420:db8:11::444:d1]:10250/containerLogs/default/my-nginx-deploy-64fdd68f8d-klg78/nginx\": Gateway Timeout","code":500}
I0122 06:15:39.735207 23430 helpers.go:216] server response object: [{
"metadata": {},
"status": "Failure",
"message": "Get \"https://[2001:420:db8:11::444:d1]:10250/containerLogs/default/my-nginx-deploy-64fdd68f8d-klg78/nginx\": Gateway Timeout",
"code": 500
}]
F0122 06:15:39.735240 23430 helpers.go:115] Error from server: Get "https://[2001:420:db8:11::444:d1]:10250/containerLogs/default/my-nginx-deploy-64fdd68f8d-klg78/nginx": Gateway Timeout
kubelet is listening on IPv6.
netstat -tulpn | grep kubelet
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 2734333/kubelet
tcp 0 0 127.0.0.1:3507 0.0.0.0:* LISTEN 2734333/kubelet
tcp6 0 0 :::10250 :::* LISTEN 2734333/kubelet
I tried to simulate the HTTP request as it would be performed by kube-apiserver, and it succeeds (as shown below). So I am not sure why the kubectl logs podname command is failing.
curl -k -g https://[2001:420:db8:11::444:d1]:10250/containerLogs/default/my-nginx-deploy-64fdd68f8d-klg78/nginx --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt --key /etc/kubernetes/pki/apiserver-kubelet-client.key
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/01/21 11:46:44 [notice] 1#1: using the "epoll" event method
2022/01/21 11:46:44 [notice] 1#1: nginx/1.21.5
2022/01/21 11:46:44 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6)
2022/01/21 11:46:44 [notice] 1#1: OS: Linux 3.10.0-1062.1.2.el7.x86_64
2022/01/21 11:46:44 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2022/01/21 11:46:44 [notice] 1#1: start worker processes
2022/01/21 11:46:44 [notice] 1#1: start worker process 31
2022/01/21 11:46:44 [notice] 1#1: start worker process 32
2022/01/21 11:46:44 [notice] 1#1: start worker process 33
2022/01/21 11:46:44 [notice] 1#1: start worker process 34
When I run kubelet with -v=8 and send the HTTP request manually from the master node (as shown above), I can see the request is received by kubelet (whose IPv6 address is 2001:420:db8:11::444:cf).
journalctl -u kubelet.service -f | grep -E "/var/lib/docker/containers|srcIP"
Jan 22 06:43:46 xyz-k8s-33 kubelet[2734333]: I0122 06:43:46.471490 2734333 logs.go:319] "Finished parsing log file" path="/var/lib/docker/containers/72a362c92adbe1315c9065cd2d2148cb775412f50edcd52dc21ed660e73209da/72a362c92adbe1315c9065cd2d2148cb775412f50edcd52dc21ed660e73209da-json.log"
Jan 22 06:43:46 xyz-k8s-33 kubelet[2734333]: I0122 06:43:46.471577 2734333 httplog.go:89] "HTTP" verb="GET" URI="/containerLogs/default/my-nginx-deploy-64fdd68f8d-klg78/nginx" latency="14.793967ms" userAgent="curl/7.29.0" srcIP="[2001:420:db8:11::444:cf]:49373" resp=200
But when I run the command kubectl logs podname, I do not see such logs on the kubelet. This makes me think kube-apiserver did not send the request at all; it simply failed to do so.
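One additional check that may help narrow this down (a hedged suggestion, not part of the original post): confirm which addresses the API server sees for the node, and whether any HTTP proxy environment variables are set on the kube-apiserver process, since a proxy that cannot reach the kubelet's IPv6 address would surface exactly as a Gateway Timeout. The node name xyz-k8s-33 is taken from the kubelet logs above:

kubectl get node xyz-k8s-33 -o jsonpath='{.status.addresses}'
# check for proxy variables in the running apiserver's environment
cat /proc/$(pgrep kube-apiserver)/environ | tr '\0' '\n' | grep -i _proxy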

kubeadm init 1.9 hangs with vsphere cloud provider

kubeadm init seems to hang when I start using the vSphere cloud provider. I followed the instructions from here - has anybody got it working with 1.9?
root#master-0:~# kubeadm init --config /tmp/kube.yaml
[init] Using Kubernetes version: v1.9.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING Hostname]: hostname "master-0" could not be reached
[WARNING Hostname]: hostname "master-0" lookup master-0 on 8.8.8.8:53: no such host
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master-0 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.11.0.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
Master OS details
root#master-0:~# uname -r
4.4.0-21-generic
root#master-0:~# docker version
Client:
Version: 17.03.2-ce
API version: 1.27
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Server:
Version: 17.03.2-ce
API version: 1.27 (minimum version 1.12)
Go version: go1.7.5
Git commit: f5ec1e2
Built: Tue Jun 27 03:35:14 2017
OS/Arch: linux/amd64
Experimental: false
root#master-0:~# cat /etc/os-release
NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
ID=ubuntu
kubelet service seems to be running fine
root#master-0:~# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf, 90-local-extras.conf
Active: active (running) since Mon 2018-01-22 11:25:00 UTC; 13min ago
Docs: http://kubernetes.io/docs/
Main PID: 4270 (kubelet)
Tasks: 13 (limit: 512)
Memory: 37.6M
CPU: 11.626s
CGroup: /system.slice/kubelet.service
└─4270 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
journalctl -f -u kubelet shows some connection-refused errors, which are probably because the network plugin is missing. Those errors should go away once the network plugin is installed after kubeadm init:
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.759764 1184 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.761350 1184 controller.go:114] kubelet config controller: starting controller
Jan 22 11:17:45 localhost kubelet[1184]: I0122 11:17:45.762632 1184 controller.go:118] kubelet config controller: validating combination of defaults and flags
Jan 22 11:17:46 localhost systemd[1]: Started Kubernetes systemd probe.
Jan 22 11:17:46 localhost kubelet[1184]: W0122 11:17:46.070619 1184 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.081384 1184 server.go:182] Version: v1.9.1
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.081417 1184 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:46 localhost kubelet[1184]: I0122 11:17:46.082271 1184 plugins.go:101] No cloud provider specified.
Jan 22 11:17:46 localhost kubelet[1184]: error: failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Unit entered failed state.
Jan 22 11:17:46 localhost systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 22 11:17:56 localhost systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Jan 22 11:17:56 localhost systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Jan 22 11:17:56 localhost systemd[1]: Started kubelet: The Kubernetes Node Agent.
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.410883 1229 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.411198 1229 controller.go:114] kubelet config controller: starting controller
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.411353 1229 controller.go:118] kubelet config controller: validating combination of defaults and flags
Jan 22 11:17:56 localhost systemd[1]: Started Kubernetes systemd probe.
Jan 22 11:17:56 localhost kubelet[1229]: W0122 11:17:56.424264 1229 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429102 1229 server.go:182] Version: v1.9.1
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429156 1229 feature_gate.go:220] feature gates: &{{} map[]}
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.429247 1229 plugins.go:101] No cloud provider specified.
Jan 22 11:17:56 localhost kubelet[1229]: E0122 11:17:56.461608 1229 certificate_manager.go:314] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://10.11.0.101:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 10.11.0.101:6443: getsockopt: connection refused
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.491374 1229 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.492069 1229 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: /
Jan 22 11:17:56 localhost kubelet[1229]: I0122 11:17:56.492102 1229 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
docker ps, controller & scheduler logs
root#master-0:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6db549891439 677911f7ae8f "kube-scheduler --..." About an hour ago Up About an hour k8s_kube-scheduler_kube-scheduler-master-0_kube-system_df32e281019039e73be77e3f53d09596_0
4f7ddefbd86e 4978f9a64966 "kube-controller-m..." About an hour ago Up About an hour k8s_kube-controller-manager_kube-controller-manager-master-0_kube-system_34bad395be69e74db6304d6c4218c536_0
18604db89db6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-scheduler-master-0_kube-system_df32e281019039e73be77e3f53d09596_0
252b86ea4b5e gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-controller-manager-master-0_kube-system_34bad395be69e74db6304d6c4218c536_0
4021061bf8a6 gcr.io/google_containers/pause-amd64:3.0 "/pause" About an hour ago Up About an hour k8s_POD_kube-apiserver-master-0_kube-system_7a3ae9279d0ca7b4ada8333fbe7442ed_0
4f94163d313b gcr.io/google_containers/etcd-amd64:3.1.10 "etcd --name=etcd0..." About an hour ago Up About an hour 0.0.0.0:2379-2380->2379-2380/tcp, 0.0.0.0:4001->4001/tcp, 7001/tcp etcd
root#master-0:~# docker logs -f 4f7ddefbd86e
I0122 11:25:06.253706 1 controllermanager.go:108] Version: v1.9.1
I0122 11:25:06.258712 1 leaderelection.go:174] attempting to acquire leader lease...
E0122 11:25:06.259448 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:09.711377 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:13.969270 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:17.564964 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:20.616174 1 leaderelection.go:224] error retrieving resource lock kube-system/kube-controller-manager: Get https://10.11.0.101:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager: dial tcp 10.11.0.101:6443: getsockopt: connection refused
root#master-0:~# docker logs -f 6db549891439
W0122 11:25:06.285765 1 server.go:159] WARNING: all flags than --config are deprecated. Please begin using a config file ASAP.
I0122 11:25:06.292865 1 server.go:551] Version: v1.9.1
I0122 11:25:06.295776 1 server.go:570] starting healthz server on 127.0.0.1:10251
E0122 11:25:06.295947 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.ReplicaSet: Get https://10.11.0.101:6443/apis/extensions/v1beta1/replicasets?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296027 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.ReplicationController: Get https://10.11.0.101:6443/api/v1/replicationcontrollers?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296092 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: Get https://10.11.0.101:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296160 1 reflector.go:205] k8s.io/kubernetes/plugin/cmd/kube-scheduler/app/server.go:590: Failed to list *v1.Pod: Get https://10.11.0.101:6443/api/v1/pods?fieldSelector=spec.schedulerName%3Ddefault-scheduler%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.296218 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.StatefulSet: Get https://10.11.0.101:6443/apis/apps/v1beta1/statefulsets?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
E0122 11:25:06.297374 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.PersistentVolume: Get https://10.11.0.101:6443/api/v1/persistentvolumes?limit=500&resourceVersion=0: dial tcp 10.11.0.101:6443: getsockopt: connection refused
There was a bug in the controller manager when starting with the vSphere cloud provider. See https://github.com/kubernetes/kubernetes/issues/57279; it was fixed in 1.9.2.
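If you want to confirm the installed version and move to 1.9.2, a minimal sketch (assuming a Debian/Ubuntu host using the Kubernetes apt repository; the package versions such as 1.9.2-00 are assumptions and should be checked against the repo):

kubeadm version -o short
apt-get update && apt-get install -y kubeadm=1.9.2-00 kubelet=1.9.2-00 kubectl=1.9.2-00
kubeadm reset                          # wipes the partially initialised control plane
kubeadm init --config /tmp/kube.yaml   # re-run init with the same config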

Failed to start Kubernetes API Server due to unknown reason

The service is not starting and the listener is not active on port 8080.
Here is my Kubernetes configuration:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://centos-master:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
systemctl status kube-apiserver -l
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Mon 2017-08-14 12:07:04 +0430; 29s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 2087 ExecStart=/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS (code=exited, status=2)
Main PID: 2087 (code=exited, status=2)
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Aug 14 12:07:04 centos-master systemd[1]: Failed to start Kubernetes API Server.
Aug 14 12:07:04 centos-master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service failed.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service holdoff time over, scheduling restart.
Aug 14 12:07:04 centos-master systemd[1]: start request repeated too quickly for kube-apiserver.service
Aug 14 12:07:04 centos-master systemd[1]: Failed to start Kubernetes API Server.
Aug 14 12:07:04 centos-master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service failed.
tail -n 1000 /var/log/messages
resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.240160 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466: Failed to list *api.PersistentVolume: Get http://centos-master:8080/api/v1/persistentvolumes?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.242039 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:470: Failed to list *api.Service: Get http://centos-master:8080/api/v1/services?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.242924 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:457: Failed to list *api.Pod: Get http://centos-master:8080/api/v1/pods?fieldSelector=spec.nodeName%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.269386 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:473: Failed to list *api.ReplicationController: Get http://centos-master:8080/api/v1/replicationcontrollers?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.285782 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481: Failed to list *extensions.ReplicaSet: Get http://centos-master:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.286529 606 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.PersistentVolumeClaim: Get http://centos-master:8080/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
systemd[1]: kube-apiserver.service: main process exited, code=exited, status=2/INVALIDARGUMENT
The arguments you're using do not seem valid.
Check the list of valid arguments here.
You can also follow the Kubernetes The Hard Way guide for a trusted way to run the API server.
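If you want to see exactly which argument the apiserver rejects, one approach (a hedged sketch; the environment file paths assume the typical CentOS packaging under /etc/kubernetes and may differ on your system) is to source the same environment files the unit uses and run the ExecStart command in the foreground:

source /etc/kubernetes/config
source /etc/kubernetes/apiserver
/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS
# the first line of the error output names the flag that caused status=2/INVALIDARGUMENT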

proxy Memcache_Servers has no server available

I get "proxy Memcache_Servers has no server available" when I start haproxy.service:
[root#ha-node1 log]# systemctl restart haproxy.service
Message from syslogd#localhost at Aug 2 10:49:23 ...
haproxy[81665]: proxy Memcache_Servers has no server available!
The configuration in my haproxy.cfg:
listen Memcache_Servers
    bind 45.117.40.168:11211
    balance roundrobin
    mode tcp
    option tcpka
    server ha-node1 ha-node1:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
    server ha-node2 ha-node2:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
    server ha-node3 ha-node3:11211 check inter 10s fastinter 2s downinter 2s rise 30 fall 3
In the end, I found the IPs in my hosts file were as below:
[root#ha-node1 sysconfig]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.8.101 ha-node1 ha-node1.aa.com
192.168.8.102 ha-node2 ha-node2.aa.com
192.168.8.103 ha-node3 ha-node3.aa.com
45.117.40.168 ha-vhost devops.aa.com
192.168.8.104 nfs-backend backend.aa.com
But in my /etc/sysconfig/memcached, the listen IP did not match the host IP from /etc/hosts, so I changed it to the IP in the hosts file:
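The original post does not show the exact file contents; the following is only an illustration of what that change could look like, using the ha-node1 address 192.168.8.101 from the /etc/hosts above:

# /etc/sysconfig/memcached (illustrative)
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.8.101"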
After restarting memcached and haproxy, everything works normally now.