[root@server ~]# systemctl status postgresql-14.service
● postgresql-14.service - PostgreSQL 14 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-14.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/postgresql-14.service.d
└─override.conf
Active: failed (Result: exit-code) since Tue 2022-07-12 03:58:25 UTC; 4min 8s ago
Docs: https://www.postgresql.org/docs/14/static/
Process: 4707 ExecStart=/usr/pgsql-14/bin/postmaster -D ${PGDATA} (code=exited, status=1/FAILURE)
Process: 4702 ExecStartPre=/usr/pgsql-14/bin/postgresql-14-check-db-dir ${PGDATA} (code=exited, status=0/SUCCESS)
Main PID: 4707 (code=exited, status=1/FAILURE)
Jul 12 03:58:25 server systemd[1]: Starting PostgreSQL 14 database server...
Jul 12 03:58:25 server systemd[1]: postgresql-14.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 03:58:25 server systemd[1]: postgresql-14.service: Failed with result 'exit-code'.
Jul 12 03:58:25 server systemd[1]: Failed to start PostgreSQL 14 database server.
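The journal above only records that postmaster exited with status 1; the actual reason is normally in PostgreSQL's own startup log. A minimal way to dig further, assuming the stock PGDG layout where PGDATA defaults to /var/lib/pgsql/14/data (the override.conf drop-in shown above may set a different path):
# show the unit together with its drop-in, including the effective PGDATA
systemctl cat postgresql-14.service
# full journal context for the failed start
journalctl -xeu postgresql-14.service --no-pager
# PostgreSQL's own log usually names the real cause (bad config, permissions, stale PID file, ...)
sudo tail -n 50 /var/lib/pgsql/14/data/log/postgresql-*.log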
I am trying to run the standalone Prometheus app on a Raspberry Pi 4 (8 GB). I am following the instructions laid out here: https://pimylifeup.com/raspberry-pi-prometheus/
My prometheus.service file is this:
[Unit]
Description=Prometheus Server
Documentation=https://prometheus.io/docs/introduction/overview/
After=network-online.target
[Service]
User=pi
Restart=on-failure
ExecStart=/home/pi/prometheus/prometheus \
--config.file=/home/pi/prometheus/prometheus.yml \
--storage.tsdb.path=/home/pi/prometheus/data
[Install]
WantedBy=multi-user.target
But when I try to run the service I get the following error.
● prometheus.service - Prometheus Server
Loaded: loaded (/etc/systemd/system/prometheus.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2022-11-24 18:42:51 GMT; 2s ago
Docs: https://prometheus.io/docs/introduction/overview/
Process: 485265 ExecStart=/home/pi/prometheus/prometheus --config.file=/home/pi/prometheus/prometheus.yml --storage.tsdb.path=/home/pi/prometheus/data (code=exited, status=2)
Main PID: 485265 (code=exited, status=2)
CPU: 160ms
Nov 24 18:42:51 master2 systemd[1]: prometheus.service: Scheduled restart job, restart counter is at 5.
Nov 24 18:42:51 master2 systemd[1]: Stopped Prometheus Server.
Nov 24 18:42:51 master2 systemd[1]: prometheus.service: Start request repeated too quickly.
Nov 24 18:42:51 master2 systemd[1]: prometheus.service: Failed with result 'exit-code'.
Nov 24 18:42:51 master2 systemd[1]: Failed to start Prometheus Server.
What does Error Status 2 mean in this context? Is it a permission problem, or something else?
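For a Go binary like Prometheus, exit status 2 often indicates a command-line or configuration parsing problem rather than a permission issue, but the journal excerpt above only shows the restart loop, not the underlying error. A quick way to see it is to run the exact ExecStart command by hand as the service user (paths taken from the unit file above):
sudo -u pi /home/pi/prometheus/prometheus \
  --config.file=/home/pi/prometheus/prometheus.yml \
  --storage.tsdb.path=/home/pi/prometheus/data
# the same error should also be in the journal, before the restart-loop messages:
journalctl -u prometheus --no-pager -n 50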
I am trying to join a worker node to a k8s cluster.
sudo kubeadm join 10.2.67.201:6443 --token x --discovery-token-ca-cert-hash sha256:x
But I get an error at this stage:
'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp
Error:
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
I can see that the kubelet service is down:
journalctl -xeu kubelet
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished shutting down.
Nov 22 15:49:00 s001as-ceph-node-03 systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Nov 22 15:49:00 s001as-ceph-node-03 kubelet[286703]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
Nov 22 15:49:00 s001as-ceph-node-03 kubelet[286703]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
Nov 22 15:49:00 s001as-ceph-node-03 kubelet[286703]: F1122 15:49:00.224350 286703 server.go:251] unable to load client CA file /etc/kubernetes/ssl/ca.crt: open /etc/kubernetes/ssl/ca.cr
Nov 22 15:49:00 s001as-ceph-node-03 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Nov 22 15:49:00 s001as-ceph-node-03 systemd[1]: Unit kubelet.service entered failed state.
Nov 22 15:49:00 s001as-ceph-node-03 systemd[1]: kubelet.service failed.
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished shutting down.
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Nov 22 15:49:10 s001as-ceph-node-03 kubelet[286717]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
Nov 22 15:49:10 s001as-ceph-node-03 kubelet[286717]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
Nov 22 15:49:10 s001as-ceph-node-03 kubelet[286717]: F1122 15:49:10.476478 286717 server.go:251] unable to load client CA file /etc/kubernetes/ssl/ca.crt: open /etc/kubernetes/ssl/ca.cr
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: Unit kubelet.service entered failed state.
Nov 22 15:49:10 s001as-ceph-node-03 systemd[1]: kubelet.service failed.
I fixed it.
Just copy /etc/kubernetes/pki/ca.crt into /etc/kubernetes/ssl/ca.crt
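For example, using exactly the paths from the log above:
sudo mkdir -p /etc/kubernetes/ssl
sudo cp /etc/kubernetes/pki/ca.crt /etc/kubernetes/ssl/ca.crt
sudo systemctl restart kubelet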
I created a master node on GCE using these commands:
gcloud compute instances create master --machine-type g1-small --zone europe-west1-d
gcloud compute addresses create myexternalip --region europe-west1
gcloud compute target-pools create kubernetes --region europe-west1
gcloud compute target-pools add-instances kubernetes --instances master --instances-zone europe-west1-d
gcloud compute forwarding-rules create kubernetes-forward --address myexternalip --region europe-west1 --ports 1-65535 --target-pool kubernetes
gcloud compute forwarding-rules describe kubernetes-forward
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
and opened all the firewall ports.
After that I created an AWS EC2 instance, opened its firewall ports, and used:
kubeadm join --token 55d287.b540e254a280f853 ip:6443 --discovery-token-unsafe-skip-ca-verification
to connect the instance to the cluster.
But the node is not displayed on the master.
Docker version: 17.12
Kubernetes version: 1.9.3
Update:
Output from the node on AWS EC2:
systemctl status kubelet.service:
kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sat 2018-02-24 20:23:53 UTC; 23s ago
Docs: http://kubernetes.io/docs/
Main PID: 30678 (kubelet)
Tasks: 5
Memory: 13.4M
CPU: 125ms
CGroup: /system.slice/kubelet.service
└─30678 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests -
Feb 24 20:23:53 ip-172-31-0-250 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Feb 24 20:23:53 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:23:53 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.420375 30678 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.420764 30678 controller.go:114] kubelet config controller: starting controller
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.420944 30678 controller.go:118] kubelet config controller: validating combination of defaults and flags
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: W0224 20:23:53.425410 30678 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.444969 30678 server.go:182] Version: v1.9.3
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.445274 30678 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:23:53 ip-172-31-0-250 kubelet[30678]: I0224 20:23:53.445565 30678 plugins.go:101] No cloud provider specified.
journalctl -u kubelet:
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:12 ip-172-31-0-250 kubelet[30243]: I0224 20:15:12.819249 30243 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:15:12 ip-172-31-0-250 kubelet[30243]: I0224 20:15:12.821054 30243 controller.go:114] kubelet config controller: starting controller
Feb 24 20:15:12 ip-172-31-0-250 kubelet[30243]: I0224 20:15:12.821243 30243 controller.go:118] kubelet config controller: validating combination of defaults and flags
Feb 24 20:15:12 ip-172-31-0-250 kubelet[30243]: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: kubelet.service: Unit entered failed state.
Feb 24 20:15:12 ip-172-31-0-250 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:23 ip-172-31-0-250 kubelet[30304]: I0224 20:15:23.186834 30304 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:15:23 ip-172-31-0-250 kubelet[30304]: I0224 20:15:23.187255 30304 controller.go:114] kubelet config controller: starting controller
Feb 24 20:15:23 ip-172-31-0-250 kubelet[30304]: I0224 20:15:23.187451 30304 controller.go:118] kubelet config controller: validating combination of defaults and flags
Feb 24 20:15:23 ip-172-31-0-250 kubelet[30304]: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: kubelet.service: Unit entered failed state.
Feb 24 20:15:23 ip-172-31-0-250 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:33 ip-172-31-0-250 kubelet[30311]: I0224 20:15:33.422948 30311 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:15:33 ip-172-31-0-250 kubelet[30311]: I0224 20:15:33.423349 30311 controller.go:114] kubelet config controller: starting controller
Feb 24 20:15:33 ip-172-31-0-250 kubelet[30311]: I0224 20:15:33.423525 30311 controller.go:118] kubelet config controller: validating combination of defaults and flags
Feb 24 20:15:33 ip-172-31-0-250 kubelet[30311]: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: kubelet.service: Unit entered failed state.
Feb 24 20:15:33 ip-172-31-0-250 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 24 20:15:43 ip-172-31-0-250 systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Feb 24 20:15:43 ip-172-31-0-250 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Feb 24 20:15:43 ip-172-31-0-250 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Feb 24 20:15:43 ip-172-31-0-250 kubelet[30319]: I0224 20:15:43.671742 30319 feature_gate.go:226] feature gates: &{{} map[]}
Feb 24 20:15:43 ip-172-31-0-250 kubelet[30319]: I0224 20:15:43.672195 30319 controller.go:114] kubelet config controller: starting controller
Update:
The error is on the AWS EC2 instance side, but I can't find what is wrong.
PROBLEM SOLVED
You should initialize kubeadm with the --apiserver-advertise-address flag.
After you create the load balancer, run this command to show the external load balancer IP address:
gcloud compute forwarding-rules describe kubernetes-forward
Then initialize the cluster with this flag:
--apiserver-advertise-address=external_load_balancer_ip
So your kubeadm command looks like this:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=external_load_balancer_ip
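Once the join succeeds on the worker, you can confirm that the node registered by running this on the master:
kubectl get nodes -o wide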
The service is not starting, and the listener is not active on port 8080.
Here is my Kubernetes configuration:
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://centos-master:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://centos-master:2379"
systemctl status kube-apiserver -l
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Mon 2017-08-14 12:07:04 +0430; 29s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 2087 ExecStart=/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS (code=exited, status=2)
Main PID: 2087 (code=exited, status=2)
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Aug 14 12:07:04 centos-master systemd[1]: Failed to start Kubernetes API Server.
Aug 14 12:07:04 centos-master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service failed.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service holdoff time over, scheduling restart.
Aug 14 12:07:04 centos-master systemd[1]: start request repeated too quickly for kube-apiserver.service
Aug 14 12:07:04 centos-master systemd[1]: Failed to start Kubernetes API Server.
Aug 14 12:07:04 centos-master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 14 12:07:04 centos-master systemd[1]: kube-apiserver.service failed.
tail -n 1000 /var/log/messages
resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.240160 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:466: Failed to list *api.PersistentVolume: Get http://centos-master:8080/api/v1/persistentvolumes?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.242039 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:470: Failed to list *api.Service: Get http://centos-master:8080/api/v1/services?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.242924 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:457: Failed to list *api.Pod: Get http://centos-master:8080/api/v1/pods?fieldSelector=spec.nodeName%3D%2Cstatus.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.269386 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:473: Failed to list *api.ReplicationController: Get http://centos-master:8080/api/v1/replicationcontrollers?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.285782 606 reflector.go:199] k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:481: Failed to list *extensions.ReplicaSet: Get http://centos-master:8080/apis/extensions/v1beta1/replicasets?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
Aug 14 12:12:30 centos-master kube-scheduler: E0814 12:12:30.286529 606 reflector.go:199] pkg/controller/informers/factory.go:89: Failed to list *api.PersistentVolumeClaim: Get http://centos-master:8080/api/v1/persistentvolumeclaims?resourceVersion=0: dial tcp 10.0.2.4:8080: getsockopt: connection refused
systemd[1]: kube-apiserver.service: main process exited, code=exited, status=2/INVALIDARGUMENT
The arguments you're using do not seem to be valid.
Check the list of valid kube-apiserver arguments.
You can also follow the Kubernetes The Hard Way guide for a trusted way to run the API server.
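One way to find the offending flag is to expand the unit's environment and run the command by hand; the apiserver prints the argument it rejects before exiting with status 2. A sketch, assuming the environment files shown in the question:
# show the unit file and the EnvironmentFile= lines it loads
systemctl cat kube-apiserver
# run the assembled command manually and read the usage error it prints
/usr/bin/kube-apiserver --logtostderr=true --v=0 \
  --etcd-servers=http://centos-master:2379 --allow-privileged=false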
I installed HAProxy (1.5.14-3.el7) from the CentOS 7.2 repository.
When I reload the HAProxy service with a broken haproxy.cfg,
the return code of the reload is incorrect.
For the HAProxy, OS, and systemd version information, please see below:
[root@unknown ~]# rpm -qa | egrep haproxy
haproxy-1.5.14-3.el7.x86_64
[root@unknown ~]#
[root@unknown ~]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@unknown ~]#
[root@unknown ~]# rpm -qa | egrep systemd
systemd-libs-219-19.el7.x86_64
systemd-219-19.el7.x86_64
systemd-sysv-219-19.el7.x86_64
[root@unknown ~]#
The return code of reload is incorrect.
[root@unknown ~]# service haproxy status
Redirecting to /bin/systemctl status haproxy.service
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2016-06-07 11:24:41 UTC; 4s ago
[root@unknown ~]#
[root@unknown ~]#
[root@unknown ~]# more /etc/haproxy/haproxy.cfg
XXXX --> I added an incorrect keyword into haproxy.cfg
Global
....
[root@unknown ~]#
[root@unknown ~]#
[root@unknown ~]# service haproxy reload
Redirecting to /bin/systemctl reload haproxy.service
[root@unknown ~]#
[root@unknown ~]# echo $?
0 --> It was successful!!!
[root@unknown ~]#
[root@unknown ~]# service haproxy status
Redirecting to /bin/systemctl status haproxy.service
● haproxy.service - HAProxy Load Balancer
Loaded: loaded (/usr/lib/systemd/system/haproxy.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2016-06-07 11:24:41 UTC; 21s ago
Process: 16507 ExecReload=/bin/kill -USR2 $MAINPID (code=exited, status=0/SUCCESS)
Main PID: 16464 (haproxy-systemd)
CGroup: /system.slice/haproxy.service
├─16464 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid
├─16465 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
└─16466 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
Jun 07 11:24:41 unknown systemd[1]: Started HAProxy Load Balancer.
Jun 07 11:24:41 unknown systemd[1]: Starting HAProxy Load Balancer...
Jun 07 11:24:41 unknown haproxy-systemd-wrapper[16464]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy...id -Ds
Jun 07 11:24:57 unknown systemd[1]: Reloaded HAProxy Load Balancer.
Jun 07 11:24:57 unknown haproxy-systemd-wrapper[16464]: haproxy-systemd-wrapper: re-executing
Jun 07 11:24:57 unknown haproxy-systemd-wrapper[16464]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /etc/haproxy/haproxy... 16466
Jun 07 11:24:57 unknown haproxy-systemd-wrapper[16464]: [ALERT] 158/112457 (16508) : parsing [/etc/haproxy/haproxy.cfg:9]: unknown k...ction. --> In fact, it was wrong
Jun 07 11:24:57 unknown haproxy-systemd-wrapper[16464]: [ALERT] 158/112457 (16508) : Error(s) found in configuration file : /etc/hap...xy.cfg
Jun 07 11:24:57 unknown haproxy-systemd-wrapper[16464]: [ALERT] 158/112457 (16508) : Fatal errors found in configuration.
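A practical workaround at this point is to validate the configuration yourself before asking systemd to reload, so that a broken haproxy.cfg yields a non-zero exit code (haproxy's -c flag is the same config check the init script below uses):
haproxy -c -f /etc/haproxy/haproxy.cfg && service haproxy reload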
So I decided to use the SysV init.d script to start/reload/stop the HAProxy service.
SysV init.d script:
[root@unknown ~]# cat /etc/init.d/haproxy
#!/bin/sh
#
# chkconfig: - 85 15
# description: HA-Proxy is a TCP/HTTP reverse proxy which is particularly suited \
# for high availability environments.
# processname: haproxy
# config: /etc/haproxy/haproxy.cfg
# pidfile: /var/run/haproxy.pid
# Script Author: Simon Matter <simon.matter@invoca.ch>
# Version: 2004060600
### BEGIN INIT INFO
# Provides: HA-Proxy
# Required-Start: $network $syslog sshd
# Required-Stop:
# Default-Start: 3 4 5
# Default-Stop: 0 1 2 6
# Short-Description: HAProxy
### END INIT INFO
# Source function library.
if [ -f /etc/init.d/functions ]; then
. /etc/init.d/functions
elif [ -f /etc/rc.d/init.d/functions ] ; then
. /etc/rc.d/init.d/functions
else
exit 0
fi
# Source networking configuration.
. /etc/sysconfig/network
# Check that networking is up.
#[ ${NETWORKING} = "no" ] && exit 0
# This is our service name
BASENAME=`basename $0`
if [ -L $0 ]; then
BASENAME=`find $0 -name $BASENAME -printf %l`
BASENAME=`basename $BASENAME`
fi
[ -f /etc/$BASENAME/$BASENAME.cfg ] || exit 1
RETVAL=0
start() {
/usr/sbin/$BASENAME -c -q -f /etc/$BASENAME/$BASENAME.cfg
if [ $? -ne 0 ]; then
echo "Errors found in configuration file, check it with '$BASENAME check'."
return 1
fi
echo -n "Starting $BASENAME: "
daemon /usr/sbin/$BASENAME -D -f /etc/$BASENAME/$BASENAME.cfg -p /var/run/$BASENAME.pid
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/$BASENAME
return $RETVAL
}
stop() {
killproc $BASENAME -USR1
RETVAL=$?
echo
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$BASENAME
[ $RETVAL -eq 0 ] && rm -f /var/run/$BASENAME.pid
return $RETVAL
}
restart() {
/usr/sbin/$BASENAME -c -q -f /etc/$BASENAME/$BASENAME.cfg
if [ $? -ne 0 ]; then
echo "Errors found in configuration file, check it with '$BASENAME check'."
return 1
fi
stop
start
}
reload() {
/usr/sbin/$BASENAME -c -q -f /etc/$BASENAME/$BASENAME.cfg
if [ $? -ne 0 ]; then
echo "Errors found in configuration file, check it with '$BASENAME check'."
return 1
fi
/usr/sbin/$BASENAME -D -f /etc/$BASENAME/$BASENAME.cfg -p /var/run/$BASENAME.pid -sf $(cat /var/run/$BASENAME.pid)
}
check() {
/usr/sbin/$BASENAME -c -q -V -f /etc/$BASENAME/$BASENAME.cfg
}
rhstatus() {
status $BASENAME
}
condrestart() {
[ -e /var/lock/subsys/$BASENAME ] && restart || :
}
# See how we were called.
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
restart
;;
reload)
reload
;;
condrestart)
condrestart
;;
status)
rhstatus
;;
check)
check
;;
*)
echo $"Usage: $BASENAME {start|stop|restart|reload|condrestart|status|check}"
exit 1
esac
exit $?
When I reloaded the HAProxy service with the correct haproxy.cfg,
the command (service haproxy reload) returned 0,
but HAProxy's status became failed.
[root@unknown ~]# service haproxy status
● haproxy.service - LSB: HAProxy
Loaded: loaded (/etc/rc.d/init.d/haproxy)
Active: active (running) since Tue 2016-06-07 11:33:22 UTC; 1h 14min ago
Docs: man:systemd-sysv-generator(8)
Process: 16636 ExecStart=/etc/rc.d/init.d/haproxy start (code=exited, status=0/SUCCESS)
Main PID: 16641 (haproxy)
CGroup: /system.slice/haproxy.service
mq16641 /usr/sbin/haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
Jun 07 11:33:22 unknown systemd[1]: Starting LSB: HAProxy...
Jun 07 11:33:22 unknown haproxy[16636]: Starting haproxy: [ OK ]
Jun 07 11:33:22 unknown systemd[1]: Started LSB: HAProxy.
[root@unknown ~]#
[root@unknown ~]# service haproxy reload
Reloading haproxy configuration (via systemctl): [ OK ]
[root@unknown ~]# echo $?
0 --> It was successful!!!
[root@unknown ~]#
[root@unknown ~]# service haproxy status
● haproxy.service - LSB: HAProxy
Loaded: loaded (/etc/rc.d/init.d/haproxy)
Active: failed (Result: signal) since Tue 2016-06-07 12:48:01 UTC; 1s ago
Docs: man:systemd-sysv-generator(8)
Process: 16869 ExecStop=/etc/rc.d/init.d/haproxy stop (code=exited, status=0/SUCCESS)
Process: 16863 ExecReload=/etc/rc.d/init.d/haproxy reload (code=exited, status=0/SUCCESS)
Process: 16636 ExecStart=/etc/rc.d/init.d/haproxy start (code=exited, status=0/SUCCESS)
Main PID: 16868 (code=killed, signal=KILL)
Jun 07 11:33:22 unknown systemd[1]: Starting LSB: HAProxy...
Jun 07 11:33:22 unknown haproxy[16636]: Starting haproxy: [ OK ]
Jun 07 11:33:22 unknown systemd[1]: Started LSB: HAProxy.
Jun 07 12:48:00 unknown systemd[1]: Reloaded LSB: HAProxy.
Jun 07 12:48:00 unknown systemd[1]: haproxy.service: main process exited, code=killed, status=9/KILL --> It was killed, but I don't know which process killed it. The cgroup?
Jun 07 12:48:01 unknown haproxy[16869]: [FAILED]
Jun 07 12:48:01 unknown systemd[1]: Unit haproxy.service entered failed state.
Jun 07 12:48:01 unknown systemd[1]: haproxy.service failed.
[root@unknown ~]#
I used a newer systemd to get more detailed logs:
Jun 07 13:02:59 elb systemd[1]: Starting LSB: HAProxy...
Jun 07 13:02:59 elb systemd[7010]: Executing: /etc/rc.d/init.d/haproxy start
Jun 07 13:02:59 elb haproxy[7010]: Starting haproxy: [ OK ]
Jun 07 13:02:59 elb systemd[1]: Child 7010 belongs to haproxy.service
Jun 07 13:02:59 elb systemd[1]: haproxy.service: control process exited, code=exited status=0
Jun 07 13:02:59 elb systemd[1]: haproxy.service got final SIGCHLD for state start
Jun 07 13:02:59 elb systemd[1]: Main PID loaded: 7015
Jun 07 13:02:59 elb systemd[1]: haproxy.service changed start -> running
Jun 07 13:02:59 elb systemd[1]: Job haproxy.service/start finished, result=done
Jun 07 13:02:59 elb systemd[1]: Started LSB: HAProxy. --> HAProxy started successfully
Jun 07 13:03:27 elb systemd[1]: Trying to enqueue job haproxy.service/reload/replace
Jun 07 13:03:27 elb systemd[1]: Installed new job haproxy.service/reload as 9504
Jun 07 13:03:27 elb systemd[1]: Enqueued job haproxy.service/reload as 9504
Jun 07 13:03:27 elb systemd[1]: About to execute: /etc/rc.d/init.d/haproxy reload
Jun 07 13:03:27 elb systemd[1]: Forked /etc/rc.d/init.d/haproxy as 7060
Jun 07 13:03:27 elb systemd[1]: haproxy.service changed running -> reload
Jun 07 13:03:27 elb systemd[7060]: Executing: /etc/rc.d/init.d/haproxy reload
Jun 07 13:03:27 elb systemd[1]: Child 7015 belongs to haproxy.service
Jun 07 13:03:27 elb systemd[1]: Main PID changing: 7015 -> 7065
Jun 07 13:03:27 elb systemd[1]: Child 7060 belongs to haproxy.service
Jun 07 13:03:27 elb systemd[1]: haproxy.service: control process exited, code=exited status=0
Jun 07 13:03:27 elb systemd[1]: haproxy.service got final SIGCHLD for state reload
Jun 07 13:03:27 elb systemd[1]: haproxy.service changed reload -> running
Jun 07 13:03:27 elb systemd[1]: Job haproxy.service/reload finished, result=done
Jun 07 13:03:27 elb systemd[1]: Reloaded LSB: HAProxy. --> the HAProxy reload reported success
Jun 07 13:03:27 elb systemd[1]: Child 7065 belongs to haproxy.service
Jun 07 13:03:27 elb systemd[1]: haproxy.service: main process exited, code=killed, status=9/KILL --> process 7065 has been killed unexpectedly
Jun 07 13:03:27 elb systemd[1]: haproxy.service changed running -> failed
Jun 07 13:03:27 elb systemd[1]: Unit haproxy.service entered failed state.
Jun 07 13:03:27 elb systemd[1]: haproxy.service failed.
Jun 07 13:03:27 elb systemd[1]: haproxy.service: cgroup is empty --> Did the cgroup kill process 7065? Is this a systemd bug?
In CentOS 7.1 I used the SysV init script (please see above) to reload HAProxy, and the 'service haproxy reload' command returned the correct result.
I don't know what is wrong in CentOS 7.2. I just want reloading HAProxy to behave as follows:
When the haproxy.cfg file is incorrect, 'service haproxy reload' returns 1.
When the haproxy.cfg file is correct, 'service haproxy reload' returns 0.
Can anyone help me? Thanks.
I would guess this is an SELinux issue. You can try the change below:
vi /etc/selinux/config
# change this line:
SELINUX=enforcing
# to:
SELINUX=disabled
# (leave SELINUXTYPE=targeted as it is)
:wq! # save and quit
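Note that changing SELINUX= in /etc/selinux/config only takes effect after a reboot. To test the theory first without rebooting, you can switch SELinux to permissive mode temporarily:
sudo setenforce 0        # permissive until next boot
service haproxy reload   # retry the reload
sudo setenforce 1        # switch back to enforcing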