Unable to start kube-apiserver service - Kubernetes

I am installing Kubernetes the hard way by Mumshad (https://github.com/mmumshad/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md). I am currently stuck at the "Bootstrapping the Kubernetes Control Plane" phase. I have followed the instructions in the document carefully, but for some reason kube-apiserver is not running and is stuck in an auto-restart state. Could anyone help me with this? The same issue occurs on both master nodes; however, kube-scheduler and kube-controller-manager are running properly. The errors are provided below.
root@master-1:~# service kube-apiserver status
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Sat 2020-02-22 09:28:07 UTC; 476ms ago
Docs: https://github.com/kubernetes/kubernetes
Process: 10656 ExecStart=/usr/local/bin/kube-apiserver --advertise-address=192.168.5.11 --allow-privileged=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audi
Main PID: 10656 (code=exited, status=1/FAILURE)
Feb 22 09:28:07 master-1 kube-apiserver[10656]: --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximu
Feb 22 09:28:07 master-1 kube-apiserver[10656]: --log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
Feb 22 09:28:07 master-1 kube-apiserver[10656]: --logtostderr log to standard error instead of files (default true)
Feb 22 09:28:07 master-1 kube-apiserver[10656]: --skip-headers If true, avoid header prefixes in the log messages
Feb 22 09:28:07 master-1 kube-apiserver[10656]: --skip-log-headers If true, avoid headers when opening log files
Feb 22 09:28:07 master-1 systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
Feb 22 09:28:07 master-1 kube-apiserver[10656]: --stderrthreshold severity logs at or above this threshold go to stderr (default 2)
Feb 22 09:28:07 master-1 kube-apiserver[10656]: -v, --v Level number for the log level verbosity (default 0)
Feb 22 09:28:07 master-1 kube-apiserver[10656]: --version version[=true] Print version information and quit
Feb 22 09:28:07 master-1 kube-apiserver[10656]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
root@master-1:~# kubectl get componentstatuses --kubeconfig admin.kubeconfig
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
root@master-1:~#
The kube-apiserver.service file:
cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.crt \\
--enable-admission-plugins=NodeRestriction,ServiceAccount \\
--enable-swagger-ui=true \\
--enable-bootstrap-token-auth=true \\
--etcd-cafile=/var/lib/kubernetes/ca.crt \\
--etcd-certfile=/var/lib/kubernetes/etcd-server.crt \\
--etcd-keyfile=/var/lib/kubernetes/etcd-server.key \\
--etcd-servers=https://192.168.5.11:2379,https://192.168.5.12:2379 \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \\
--kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \\
--kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \\
--kubelet-https=true \\
--runtime-config=api/all \\
--service-account-key-file=/var/lib/kubernetes/service-account.crt \\
--service-cluster-ip-range=10.96.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt \\
--tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
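Once the unit file is written, the usual systemd steps apply to pick it up and start the service (a minimal sketch, assuming the standard systemctl workflow):
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver
sudo systemctl start kube-apiserver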

I have figured out the issue. It is with the parameter --runtime-config=api/all: it is not set to any value (true/false).
Error:
Feb 25 05:49:40 master-1 kube-apiserver[1228]: I0225 05:49:40.192274 1228 server.go:639] Initializing cache sizes based on 0MB limit
Feb 25 05:49:40 master-1 kube-apiserver[1228]: I0225 05:49:40.192416 1228 server.go:150] Version: v1.17.0
Feb 25 05:49:40 master-1 kube-apiserver[1228]: Error: invalid value api/all=
Feb 25 05:49:40 master-1 kube-apiserver[1228]: Usage:
Feb 25 05:49:40 master-1 kube-apiserver[1228]: kube-apiserver [flags]
Once I set it to true (--runtime-config=api/all=true) and restarted the service, kube-apiserver started running.
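For reference, the corrected line as it should appear in /etc/systemd/system/kube-apiserver.service:
--runtime-config=api/all=true \
followed by a reload and restart so systemd picks up the change:
sudo systemctl daemon-reload
sudo systemctl restart kube-apiserver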
Results:
root@master-1:~# service kube-apiserver status
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2020-02-25 07:17:09 UTC; 2min 31s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 864 (kube-apiserver)
Tasks: 12 (limit: 2361)
CGroup: /system.slice/kube-apiserver.service
└─864 /usr/local/bin/kube-apiserver --advertise-address=192.168.5.11 --allow-privileged=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxs
Feb 25 07:17:09 master-1 systemd[1]: Started Kubernetes API Server.
Feb 25 07:17:17 master-1 systemd-journald[412]: Suppressed 1644 messages from kube-apiserver.service
Feb 25 07:17:17 master-1 kube-apiserver[864]: I0225 07:17:17.017139 864 controller.go:606] quota admission added evaluator for: serviceaccounts
root@master-1:~# kubectl get componentstatuses --kubeconfig admin.kubeconfig
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
root@master-1:~#

I was following Kelsey Hightower's tutorial to bootstrap my cluster and started facing this error. The ExecStart command worked when run in a terminal but failed under systemd; once I realized this, I removed the single quotes and it worked like a charm.
## Earlier
--runtime-config='api/all=true'
## Correct
--runtime-config=api/all=true

Share the output of the systemctl status kube-apiserver -l command; also check the /var/log/messages file and post the error here.
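For example:
sudo systemctl status kube-apiserver -l
sudo journalctl -u kube-apiserver --no-pager | tail -n 50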

Related

Cannot Launch Prometheus App for RPi, Error code 2

I am trying to run Prometheus' standalone app on an RPI4 8GB. I am following the instructions laid out here: https://pimylifeup.com/raspberry-pi-prometheus/
My prometheus.service file is this:
[Unit]
Description=Prometheus Server
Documentation=https://prometheus.io/docs/introduction/overview/
After=network-online.target
[Service]
User=pi
Restart=on-failure
ExecStart=/home/pi/prometheus/prometheus \
--config.file=/home/pi/prometheus/prometheus.yml \
--storage.tsdb.path=/home/pi/prometheus/data
[Install]
WantedBy=multi-user.target
But when I try to run the service, I get the following error:
● prometheus.service - Prometheus Server
Loaded: loaded (/etc/systemd/system/prometheus.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2022-11-24 18:42:51 GMT; 2s ago
Docs: https://prometheus.io/docs/introduction/overview/
Process: 485265 ExecStart=/home/pi/prometheus/prometheus --config.file=/home/pi/prometheus/prometheus.yml --storage.tsdb.path=/home/pi/prometheus/data (code=exited, status=2)
Main PID: 485265 (code=exited, status=2)
CPU: 160ms
Nov 24 18:42:51 master2 systemd[1]: prometheus.service: Scheduled restart job, restart counter is at 5.
Nov 24 18:42:51 master2 systemd[1]: Stopped Prometheus Server.
Nov 24 18:42:51 master2 systemd[1]: prometheus.service: Start request repeated too quickly.
Nov 24 18:42:51 master2 systemd[1]: prometheus.service: Failed with result 'exit-code'.
Nov 24 18:42:51 master2 systemd[1]: Failed to start Prometheus Server.
What does Error Status 2 mean in this context? Is it a permission problem, or something else?
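One way to see the actual error behind status=2 is to run the same command by hand as the service user, and to validate the config with promtool (a sketch; this assumes promtool was extracted alongside the prometheus binary):
sudo -u pi /home/pi/prometheus/prometheus --config.file=/home/pi/prometheus/prometheus.yml --storage.tsdb.path=/home/pi/prometheus/data
/home/pi/prometheus/promtool check config /home/pi/prometheus/prometheus.yml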

Kubernetes on worker node - kubelet.service not starting

I am trying to set up a new worker node on CentOS 7.9 with the following commands.
# setenforce 0
# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
# firewall disabled already.
# swapoff -a
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# yum install kubeadm -y
# systemctl enable kubelet
# systemctl start kubelet
But the kubelet service status shows the error below.
# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Wed 2020-12-02 16:49:22 IST; 3s ago
Docs: https://kubernetes.io/docs/
Process: 4442 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 4442 (code=exited, status=255)
Dec 02 16:49:22 k8s-node-01 systemd[1]: Unit kubelet.service entered failed state.
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/...81 +0x4f
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: created by k8s.io/kubernetes/vendor/k8s.io/component-base/logs.InitLogs
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/...58 +0x8a
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: goroutine 47 [select]:
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.(*worker).start(0xc000050be0)
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/...4 +0x105
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: created by k8s.io/kubernetes/vendor/go.opencensus.io/stats/view.init.0
Dec 02 16:49:22 k8s-node-01 kubelet[4442]: /workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/...32 +0x57
Dec 02 16:49:22 k8s-node-01 systemd[1]: kubelet.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
I have the following Kubernetes and Docker versions installed.
# kubelet --version
Kubernetes v1.19.4
# docker --version
Docker version 19.03.14, build 5eb3275d40
I also tried to join, but even this fails.
# kubeadm join 65.66.67.68:6443 --token tu7qvt.1rfzhnxevg8m792h --discovery-token-ca-cert-hash sha256:48109668a4eadfs3c0c13a04d24a99bd82ff2eredefab6be6b78aadeead358074ee
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 65.66.67.55:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 65.66.67.55:10248: connect: connection refused.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
With the -v=9 option:
# kubeadm join 65.66.67.68:6443 --token tu7qvt.1rfzhnxevg8m792h --discovery-token-ca-cert-hash sha256:48109668a4eadfs3c0c13a04d24a99bd82ff2eredefab6be6b78aadeead358074ee -v=9
I1203 10:25:29.374052 11716 round_trippers.go:423] curl -k -v -XGET -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.19.4 (linux/amd64) kubernetes/d360454" 'https://65.66.67.68:6443/api/v1/nodes/k8s-node-01?timeout=10s'
I1203 10:25:29.376358 11716 round_trippers.go:443] GET https://65.66.67.68:6443/api/v1/nodes/k8s-node-01?timeout=10s 404 Not Found in 2 milliseconds
I1203 10:25:29.376406 11716 round_trippers.go:449] Response Headers:
I1203 10:25:29.376411 11716 round_trippers.go:452] Content-Type: application/json
I1203 10:25:29.376415 11716 round_trippers.go:452] Content-Length: 192
I1203 10:25:29.376419 11716 round_trippers.go:452] Date: Thu, 03 Dec 2020 04:55:29 GMT
I1203 10:25:29.376423 11716 round_trippers.go:452] Cache-Control: no-cache, private
I1203 10:25:29.376443 11716 request.go:1097] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"k8s-node-01\" not found","reason":"NotFound","details":{"name":"k8s-node-01","kind":"nodes"},"code":404}
timed out waiting for the condition
error uploading crisocket
What could be wrong in the installation? Any direction would be helpful.
The node joined the cluster after I commented out the entries in the /etc/resolv.conf file; once the node had joined the cluster successfully, I uncommented them again. Now, on my master, all the namespaces and nodes are running fine.
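A sketch of that workaround, assuming /etc/resolv.conf is a plain file (not a symlink managed by NetworkManager or systemd-resolved):
sudo cp /etc/resolv.conf /etc/resolv.conf.bak   # keep a backup
sudo sed -i 's/^/#/' /etc/resolv.conf           # comment out every entry
# run the kubeadm join command here
sudo cp /etc/resolv.conf.bak /etc/resolv.conf   # restore the original entries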

Kubeadm join failing and hanging at "Setting node annotation to enable volume controller attach/detach"

I'm having issues getting a node to join my k8s cluster. It was in the cluster before, but now refuses to join. Nothing on the servers themselves has changed, besides leaving and re-joining the cluster.
Kubernetes version: 1.14.9beta (Compiled from source for kubespawner compatibility)
OS version(s): Manjaro Openbox (All systems are on the same patch levels)
Below is the journalctl log from a join attempt:
Nov 18 14:56:06 dragon-den kubelet[17701]: Flag --address has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-con>
Nov 18 14:56:06 dragon-den kubelet[17701]: Flag --allow-privileged has been deprecated, will be removed in a future version
Nov 18 14:56:06 dragon-den kubelet[17701]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubel>
Nov 18 14:56:07 dragon-den kubelet[17701]: I1118 14:56:07.856840 17701 server.go:418] Version: v1.14.9-beta.0.44+500f5aba80d712
Nov 18 14:56:07 dragon-den kubelet[17701]: I1118 14:56:07.857493 17701 plugins.go:103] No cloud provider specified.
Nov 18 14:56:07 dragon-den kubelet[17701]: W1118 14:56:07.857566 17701 server.go:557] standalone mode, no API client
Nov 18 14:56:08 dragon-den kubelet[17701]: W1118 14:56:08.026479 17701 server.go:475] No api server defined - no events will be sent to API server.
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.026530 17701 server.go:629] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.027259 17701 container_manager_linux.go:261] container manager verified user specified cgroup-root exists: []
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.027322 17701 container_manager_linux.go:266] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime>
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.027617 17701 container_manager_linux.go:286] Creating device plugin manager: true
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.027819 17701 state_mem.go:36] [cpumanager] initializing new in-memory state store
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.036358 17701 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.036403 17701 client.go:104] Start docker client with request timeout=2m0s
Nov 18 14:56:08 dragon-den kubelet[17701]: W1118 14:56:08.063851 17701 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.063918 17701 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Nov 18 14:56:08 dragon-den kubelet[17701]: W1118 14:56:08.064142 17701 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 18 14:56:08 dragon-den kubelet[17701]: W1118 14:56:08.068056 17701 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.070412 17701 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.086972 17701 docker_service.go:258] Docker Info: &{ID:UZIS:BXME:6KSB:34PI:HKEZ:HRT6:I4XW:44VI:AR5M:Q4P7:EGFA:KRDD Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersS>
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.087132 17701 docker_service.go:271] Setting cgroupDriver to systemd
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.099887 17701 remote_runtime.go:62] parsed scheme: ""
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.099950 17701 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100029 17701 remote_image.go:50] parsed scheme: ""
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100049 17701 remote_image.go:50] scheme "" not registered, fallback to default scheme
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100315 17701 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0 <nil>}]
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100356 17701 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{/var/run/dockershim.sock 0 <nil>}]
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100381 17701 clientconn.go:796] ClientConn switching balancer to "pick_first"
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100416 17701 clientconn.go:796] ClientConn switching balancer to "pick_first"
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100519 17701 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0000cbaa0, CONNECTING
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100575 17701 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000349b20, CONNECTING
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.100959 17701 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc000349b20, READY
Nov 18 14:56:08 dragon-den kubelet[17701]: I1118 14:56:08.101021 17701 balancer_conn_wrappers.go:131] pickfirstBalancer: HandleSubConnStateChange: 0xc0000cbaa0, READY
Nov 18 14:56:28 dragon-den kubelet[17701]: E1118 14:56:28.454199 17701 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Nov 18 14:56:28 dragon-den kubelet[17701]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.488935 17701 kuberuntime_manager.go:210] Container runtime docker initialized, version: 19.03.4-ce, apiVersion: 1.40.0
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.489074 17701 volume_host.go:77] kubeClient is nil. Skip initialization of CSIDriverLister
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.489559 17701 csi_plugin.go:215] kubernetes.io/csi: kubeclient not set, assuming standalone kubelet
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.509315 17701 server.go:1055] Started kubelet
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.509368 17701 kubelet.go:1387] No api server defined - no node status update will be sent.
Nov 18 14:56:28 dragon-den kubelet[17701]: E1118 14:56:28.509369 17701 kubelet.go:1282] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory ca>
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.509441 17701 server.go:141] Starting to listen on 0.0.0.0:10250
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510125 17701 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510164 17701 status_manager.go:148] Kubernetes client is nil, not starting status manager.
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510176 17701 kubelet.go:1808] Starting kubelet main sync loop.
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510201 17701 volume_manager.go:248] Starting Kubelet Volume Manager
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510197 17701 kubelet.go:1825] skipping pod synchronization - [container runtime status check may not have completed yet., PLEG is not healthy: pleg has yet to be successful.]
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510272 17701 desired_state_of_world_populator.go:130] Desired state populator starts to run
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.510546 17701 server.go:343] Adding debug handlers to kubelet server.
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.610328 17701 kubelet.go:1825] skipping pod synchronization - container runtime status check may not have completed yet.
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.667490 17701 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.708302 17701 cpu_manager.go:155] [cpumanager] starting with none policy
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.708327 17701 cpu_manager.go:156] [cpumanager] reconciling every 10s
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.708362 17701 policy_none.go:42] [cpumanager] none policy: Start
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.710493 17701 reconciler.go:154] Reconciler: start to sync state
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.732840 17701 container.go:422] Failed to get RecentStats("/libcontainer_17701_systemd_test_default.slice") while determining the next housekeeping: unable to find data in memory>
Nov 18 14:56:28 dragon-den kubelet[17701]: W1118 14:56:28.783045 17701 manager.go:540] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Nov 18 14:56:28 dragon-den kubelet[17701]: I1118 14:56:28.783441 17701 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
I think my issue is related to "server.go:475] No api server defined - no events will be sent to API server."
or to "cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d", but I'm not 100% sure.
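If the CNI warning is the culprit, a quick check is whether any network config actually exists on the node (the Calico file name below is just an example of what a healthy node might show):
ls -l /etc/cni/net.d/
# e.g. 10-calico.conflist on a node where Calico is installed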
Node config:
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=127.0.0.1"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
#KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
# location of the api-server
#KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
# Add your own!
KUBELET_ARGS=""
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
Server config:
###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
# KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname
#KUBELET_HOSTNAME="--hostname-override=127.0.0.1"
# location of the api-server
#KUBELET_API_SERVER="--api-servers=http://127.0.0.1:8080"
# Add your own!
KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
--kubeconfig=/etc/kubernetes/kubelet.conf \
--config=/var/lib/kubelet/config.yaml \
--network-plugin=cni \
--pod-infra-container-image=k8s.gcr.io/pause:3.1
--cgroup-driver=systemd"
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
#KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"
I'm also using Calico for the networking, if that makes any difference.
This is a sign that you have some things left behind from when this node was part of the cluster. This is not mandatory in most cases, but I highly recommend running some commands besides kubeadm reset to clean up your node after removing it from a cluster.
sudo kubeadm reset -f
sudo systemctl stop kubelet
sudo systemctl stop docker
sudo rm -rf /var/lib/cni/
sudo rm -rf /var/lib/kubelet/*
sudo rm -rf /var/lib/etcd/*
sudo rm -rf /etc/cni/
sudo ifconfig cni0 down
sudo ifconfig flannel.1 down
sudo ifconfig docker0 down
sudo ip link delete cni0
sudo ip link delete flannel.1
sudo systemctl start docker
Some of these commands may fail depending on your implementation.
I found this script created for this purpose. It's a bit old, but you can take some things from it.
Here is a similar case to yours: Reset Kubernetes Cluster.
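After the cleanup, a fresh join command can be generated on the control-plane node and run on the cleaned node (a sketch; the token and hash are placeholders):
# on the master
kubeadm token create --print-join-command
# on the worker, run the printed command, e.g.
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>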

kubeadm join failure, error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition

When joining the node:
sudo kubeadm join 172.16.7.101:6443 --token 4mya3g.duoa5xxuxin0l6j3 --discovery-token-ca-cert-hash sha256:bba76ac7a207923e8cae0c466dac166500a8e0db43fb15ad9018b615bdbabeb2
The outputs:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
And systemctl status kubelet:
node@node:~$ sudo systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Wed 2019-04-17 06:20:56 UTC; 12min ago
Docs: https://kubernetes.io/docs/home/
Main PID: 26716 (kubelet)
Tasks: 16 (limit: 1111)
CGroup: /system.slice/kubelet.service
└─26716 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml -
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.022384 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.073969 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.122820 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.228838 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.273153 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.330578 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.431114 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.473501 26716 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.531294 26716 kubelet.go:2244] node "node" not found
Apr 17 06:33:38 node kubelet[26716]: E0417 06:33:38.632347 26716 kubelet.go:2244] node "node" not found
Regarding the Unauthorized errors, I checked on the master with kubeadm token list, and the token is valid.
So what's the problem? Thanks a lot.
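For reference, the token check on the master, plus the command that would generate a fresh join command if the token had expired (a sketch):
kubeadm token list
kubeadm token create --print-join-command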
Please verify the pre- and post-installation steps here:
Please also verify the status of your services (enabled and running) and your Docker environment:
sudo systemctl enable docker
sudo systemctl enable kubelet
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
Are the results the same if you run the init command with --ignore-preflight-errors=all?
For more details, please also use "journalctl -u kubelet".
Once you have more details from your logs, please take a look at "github - kubeadm/issues" here:
Please provide more details about your env in order to recreate this issue, and share your additional findings.
Could you please perform another test and run kubeadm init on your worker node, the same way as on the first node (in short, create a second master node), just to verify your working environment?

HAProxy process has been killed unexpectedly when the HAProxy service was reloaded by a sysv init script on CentOS 7.2

I installed HAProxy (1.5.14-3.el7) from the CentOS 7.2 repository.
When I reload the HAProxy service with a wrong haproxy.cfg,
the return code of the reload is incorrect.
For HAProxy, OS, and systemd information, please see below:
[root@unknown ~]# rpm -qa | egrep haproxy
haproxy-1.5.14-3.el7.x86_64
[root@unknown ~]#
[root@unknown ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@unknown ~]#
[root@unknown ~]# rpm -qa | egrep systemd
systemd-libs-219-19.el7.x86_64
systemd-219-19.el7.x86_64
systemd-sysv-219-19.el7.x86_64
[root@unknown ~]#
The return code of the reload is incorrect:
[root@unknown ~]# service haproxy status
Redirecting to /bin/systemctl status haproxy.service
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2016-06-07 11:24:41 UTC; 4s ago
[root@unknown ~]#
[root@unknown ~]#
[root@unknown ~]# more /etc/haproxy/haproxy.cfg
XXXX **--> I added an incorrect keyword into haproxy.cfg**
Global
....
[root@unknown ~]#
[root@unknown ~]#
[root@unknown ~]# service haproxy reload
Redirecting to /bin/systemctl reload haproxy.service
[root@unknown ~]#
[root@unknown ~]# echo $?
0 **--> It was successful !!!**
[root@unknown ~]#
[root@unknown ~]# service haproxy status
Redirecting to /bin/systemctl status haproxy.service
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2016-06-07 11:24:41 UTC; 21s ago
Process: 16507 ExecReload=/bin/kill -USR2 $MAINPID (code=exited, status=0/SUCCESS)
Main PID: 16464 (haproxy-systemd)
CGroup: /system.slice/haproxy.service
├─16464 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
├─16465 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
└─16466 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
Jun 07 11:24:41 unknown systemd[1]: Started HAProxy Load Balancer.
Jun 07 11:24:41 unknown systemd[1]: Starting HAProxy Load Balancer...
Jun 07 11:24:41 unknown haproxy-systemd-wrapper[16464]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy...id -Ds
Jun 07 11:24:57 unknown systemd[1]: Reloaded HAProxy Load Balancer.
Jun 07 11:24:57 unknown haproxy-systemd-wrapper[16464]: haproxy-systemd-wrapper: re-executing
Jun 07 11:24:57 unknown haproxy-systemd-wrapper[16464]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy... 16466
Jun 07 11:24:57 unknown haproxy-systemd-wrapper[16464]: [ALERT] 158/112457 (16508) : parsing [/etc/haproxy/haproxy.cfg:9]: unknown k...ction. **--> In fact, it was wrong**
Jun 07 11:24:57 unknown haproxy-systemd-wrapper[16464]: [ALERT] 158/112457 (16508) : Error(s) found in configuration file : /etc/hap...xy.cfg
Jun 07 11:24:57 unknown haproxy-systemd-wrapper[16464]: [ALERT] 158/112457 (16508) : Fatal errors found in configuration.
So I decided to use the sysv init.d script to start/reload/stop the HAProxy service.
sysv init.d script:
[root@unknown ~]# cat /etc/init.d/haproxy
#!/bin/sh
#
# chkconfig: - 85 15
# description: HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited \
# for high availability environments.
# processname: haproxy
# config: /etc/haproxy/haproxy.cfg
# pidfile: /var/run/haproxy.pid
# Script Author: Simon Matter <simon.matter@invoca.ch>
# Version: 2004060600
### BEGIN INIT INFO
# Provides: HA-Proxy
# Required-Start: $network $syslog sshd
# Required-Stop:
# Default-Start: 3 4 5
# Default-Stop: 0 1 2 6
# Short-Description: HAProxy
### END INIT INFO
# Source function library.
if [ -f /etc/init.d/functions ]; then
. /etc/init.d/functions
elif [ -f /etc/rc.d/init.d/functions ] ; then
. /etc/rc.d/init.d/functions
else
exit 0
fi
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
#[ ${NETWORKING} = "no" ] && exit 0
# This is our service name
BASENAME=`basename $0`
if [ -L $0 ]; then
BASENAME=`find $0 -name $BASENAME -printf %l`
BASENAME=`basename $BASENAME`
fi
[ -f /etc/$BASENAME/$BASENAME.cfg ] || exit 1
RETVAL=0
start() {
/usr/sbin/$BASENAME -c -q -f /etc/$BASENAME/$BASENAME.cfg
if [ $? -ne 0 ]; then
echo "Errors found in configuration file, check it with '$BASENAME check'."
return 1
fi
echo -n "Starting $BASENAME: "
daemon /usr/sbin/$BASENAME -D -f /etc/$BASENAME/$BASENAME.cfg -p /var/run/$BASENAME.pid
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/$BASENAME
return $RETVAL
}
stop() {
killproc $BASENAME -USR1
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$BASENAME
[ $RETVAL -eq 0 ] && rm -f /var/run/$BASENAME.pid
return $RETVAL
}
restart() {
/usr/sbin/$BASENAME -c -q -f /etc/$BASENAME/$BASENAME.cfg
if [ $? -ne 0 ]; then
echo "Errors found in configuration file, check it with '$BASENAME check'."
return 1
fi
stop
start
}
reload() {
/usr/sbin/$BASENAME -c -q -f /etc/$BASENAME/$BASENAME.cfg
if [ $? -ne 0 ]; then
echo "Errors found in configuration file, check it with '$BASENAME check'."
return 1
fi
/usr/sbin/$BASENAME -D -f /etc/$BASENAME/$BASENAME.cfg -p /var/run/$BASENAME.pid -sf $(cat /var/run/$BASENAME.pid)
}
check() {
/usr/sbin/$BASENAME -c -q -V -f /etc/$BASENAME/$BASENAME.cfg
}
rhstatus() {
status $BASENAME
}
condrestart() {
[ -e /var/lock/subsys/$BASENAME ] && restart || :
}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
restart
;;
reload)
reload
;;
condrestart)
condrestart
;;
status)
rhstatus
;;
check)
check
;;
*)
echo $"Usage: $BASENAME {start|stop|restart|reload|condrestart|status|check}"
exit 1
esac
exit $?
When I reloaded the HAProxy service with the correct haproxy.cfg,
the command (service haproxy reload) returned 0,
but HAProxy's status became failed.
[root@unknown ~]# service haproxy status
● haproxy.service - LSB: HAProxy
Loaded: loaded (/etc/rc.d/init.d/haproxy)
Active: active (running) since Tue 2016-06-07 11:33:22 UTC; 1h 14min ago
Docs: man:systemd-sysv-generator(8)
Process: 16636 ExecStart=/etc/rc.d/init.d/haproxy start (code=exited, status=0/SUCCESS)
Main PID: 16641 (haproxy)
CGroup: /system.slice/haproxy.service
└─16641 /usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
Jun 07 11:33:22 unknown systemd[1]: Starting LSB: HAProxy...
Jun 07 11:33:22 unknown haproxy[16636]: Starting haproxy: [ OK ]
Jun 07 11:33:22 unknown systemd[1]: Started LSB: HAProxy.
[root@unknown ~]#
[root@unknown ~]# service haproxy reload
Reloading haproxy configuration (via systemctl): [ OK ]
[root@unknown ~]# echo $?
0 **--> It was successful !!!**
[root@unknown ~]#
[root@unknown ~]# service haproxy status
● haproxy.service - LSB: HAProxy
Loaded: loaded (/etc/rc.d/init.d/haproxy)
Active: failed (Result: signal) since Tue 2016-06-07 12:48:01 UTC; 1s ago
Docs: man:systemd-sysv-generator(8)
Process: 16869 ExecStop=/etc/rc.d/init.d/haproxy stop (code=exited, status=0/SUCCESS)
Process: 16863 ExecReload=/etc/rc.d/init.d/haproxy reload (code=exited, status=0/SUCCESS)
Process: 16636 ExecStart=/etc/rc.d/init.d/haproxy start (code=exited, status=0/SUCCESS)
Main PID: 16868 (code=killed, signal=KILL)
Jun 07 11:33:22 unknown systemd[1]: Starting LSB: HAProxy...
Jun 07 11:33:22 unknown haproxy[16636]: Starting haproxy: [ OK ]
Jun 07 11:33:22 unknown systemd[1]: Started LSB: HAProxy.
Jun 07 12:48:00 unknown systemd[1]: Reloaded LSB: HAProxy.
Jun 07 12:48:00 unknown systemd[1]: haproxy.service: main process exited, code=killed, status=9/KILL **--> It was killed, but I don't know which process killed it. Cgroup?**
Jun 07 12:48:01 unknown haproxy[16869]: [FAILED]
Jun 07 12:48:01 unknown systemd[1]: Unit haproxy.service entered failed state.
Jun 07 12:48:01 unknown systemd[1]: haproxy.service failed.
[root@unknown ~]#
I used a newer systemd to get more detailed logs:
Jun 07 13:02:59 elb systemd[1]: Starting LSB: HAProxy...
Jun 07 13:02:59 elb systemd[7010]: Executing: /etc/rc.d/init.d/haproxy start
Jun 07 13:02:59 elb haproxy[7010]: Starting haproxy: [ OK ]
Jun 07 13:02:59 elb systemd[1]: Child 7010 belongs to haproxy.service
Jun 07 13:02:59 elb systemd[1]: haproxy.service: control process exited, code=exited status=0
Jun 07 13:02:59 elb systemd[1]: haproxy.service got final SIGCHLD for state start
Jun 07 13:02:59 elb systemd[1]: Main PID loaded: 7015
Jun 07 13:02:59 elb systemd[1]: haproxy.service changed start -> running
Jun 07 13:02:59 elb systemd[1]: Job haproxy.service/start finished, result=done
Jun 07 13:02:59 elb systemd[1]: Started LSB: HAProxy. **--> start HAproxy successfully **
Jun 07 13:03:27 elb systemd[1]: Trying to enqueue job haproxy.service/reload/replace
Jun 07 13:03:27 elb systemd[1]: Installed new job haproxy.service/reload as 9504
Jun 07 13:03:27 elb systemd[1]: Enqueued job haproxy.service/reload as 9504
Jun 07 13:03:27 elb systemd[1]: About to execute: /etc/rc.d/init.d/haproxy reload
Jun 07 13:03:27 elb systemd[1]: Forked /etc/rc.d/init.d/haproxy as 7060
Jun 07 13:03:27 elb systemd[1]: haproxy.service changed running -> reload
Jun 07 13:03:27 elb systemd[7060]: Executing: /etc/rc.d/init.d/haproxy reload
Jun 07 13:03:27 elb systemd[1]: Child 7015 belongs to haproxy.service
Jun 07 13:03:27 elb systemd[1]: Main PID changing: 7015 -> 7065
Jun 07 13:03:27 elb systemd[1]: Child 7060 belongs to haproxy.service
Jun 07 13:03:27 elb systemd[1]: haproxy.service: control process exited, code=exited status=0
Jun 07 13:03:27 elb systemd[1]: haproxy.service got final SIGCHLD for state reload
Jun 07 13:03:27 elb systemd[1]: haproxy.service changed reload -> running
Jun 07 13:03:27 elb systemd[1]: Job haproxy.service/reload finished, result=done
Jun 07 13:03:27 elb systemd[1]: Reloaded LSB: HAProxy. **--> successful to reload HAproxy**
Jun 07 13:03:27 elb systemd[1]: Child 7065 belongs to haproxy.service
Jun 07 13:03:27 elb systemd[1]: haproxy.service: main process exited, code=killed, status=9/KILL **--> process 7065 has been killed unexpectedly**
Jun 07 13:03:27 elb systemd[1]: haproxy.service changed running -> failed
Jun 07 13:03:27 elb systemd[1]: Unit haproxy.service entered failed state.
Jun 07 13:03:27 elb systemd[1]: haproxy.service failed.
Jun 07 13:03:27 elb systemd[1]: haproxy.service: cgroup is empty **--> Did cgroup kill process 7065? Is it a bug in systemd?**
In CentOS 7.1, I used the sysv init script (please see above) to reload HAProxy, and the 'service haproxy reload' command returned the correct result.
I don't know what is wrong in CentOS 7.2. I just want to get the following results when reloading HAProxy:
When the haproxy.cfg file is incorrect, the 'service haproxy reload' command returns 1
When the haproxy.cfg file is correct, the 'service haproxy reload' command returns 0
Can anyone help me? Thanks
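(For reference, one way to get exactly that behaviour on systemd is a drop-in that validates the config before signalling the reload; a minimal sketch, assuming the stock haproxy.service unit with its haproxy-systemd-wrapper main process:)
# /etc/systemd/system/haproxy.service.d/check-before-reload.conf  (hypothetical drop-in)
[Service]
ExecReload=
ExecReload=/usr/sbin/haproxy -c -q -f /etc/haproxy/haproxy.cfg
ExecReload=/bin/kill -USR2 $MAINPID
With this, a bad haproxy.cfg makes the config check fail, so systemctl reload haproxy (and therefore service haproxy reload) returns non-zero.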
I would guess this is an SELinux issue. You can try the change below:
vi /etc/selinux/config
# change
SELINUX=enforcing
SELINUXTYPE=targeted
# to
SELINUX=disabled
SELINUXTYPE=targeted
:wq! # save and quit
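To check quickly whether SELinux is really the cause without rebooting, it can be switched to permissive mode temporarily (a sketch):
getenforce              # show the current mode
sudo setenforce 0       # permissive until the next reboot
service haproxy reload  # retry the reload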