Error creating a systemd service with a live socket - sockets

I'm trying to create a systemd service on CentOS 7.5 to access livestatus remotely through a socket proxy.
File proxy-to-livestatus.service:
[Unit]
Requires=naemon.service
After=naemon.service
[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd /run/naemon/live
File proxy-to-livestatus.socket:
[Unit]
StopWhenUnneeded=true
[Socket]
ListenStream=6557
Status:
systemctl status proxy-to-livestatus.service
● proxy-to-livestatus.service
Loaded: loaded (/etc/systemd/system/proxy-to-livestatus.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since mié 2018-07-18 09:11:58 CEST; 15s ago
Process: 3203 ExecStart=/usr/lib/systemd/systemd-socket-proxyd /run/naemon/live (code=exited, status=1/FAILURE)
Main PID: 3203 (code=exited, status=1/FAILURE)
jul 18 09:11:58 chuwi systemd[1]: Started proxy-to-livestatus.service.
jul 18 09:11:58 chuwi systemd[1]: Starting proxy-to-livestatus.service...
jul 18 09:11:58 chuwi systemd-socket-proxyd[3203]: Didn't get any sockets passed in.
jul 18 09:11:58 chuwi systemd[1]: proxy-to-livestatus.service: main process exited, code=exited, status=1/FAILURE
jul 18 09:11:58 chuwi systemd[1]: Unit proxy-to-livestatus.service entered failed state.
jul 18 09:11:58 chuwi systemd[1]: proxy-to-livestatus.service failed.

Hi, to resolve this issue we have to enable and start the socket unit first (systemd-socket-proxyd only receives its listening socket through socket activation, which is why starting the service on its own fails with "Didn't get any sockets passed in"):
systemctl enable --now proxy-to-livestatus.socket
and then start the proxy-to-livestatus.service:
systemctl start proxy-to-livestatus.service
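As a quick sanity check (a minimal sketch, reusing the unit names above), the socket unit should now be listening on port 6557:
systemctl status proxy-to-livestatus.socket   # should report "Active: active (listening)"
ss -ltn | grep 6557                           # confirms the TCP port is open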
Regards

Related

How and where to provide port 5666 in Nagios

Do I have to add port 5666 on the server side or the client side?
ankush@Backend-VM:~$ sudo systemctl status nagios-nrpe-server.service
Loaded: loaded (/lib/systemd/system/nagios-nrpe-server.service; enabled; vendor preset: enabled)
Active: **failed** (Result: exit-code) since Thu 2022-03-10 05:13:12 UTC; 17min ago
Docs: http://www.nagios.org/documentation
Main PID: 14663 (code=exited, status=1/FAILURE)
Mar 10 05:13:12 Backend-VM systemd[1]: Started Nagios Remote Plugin Executor.
Mar 10 05:13:12 Backend-VM nrpe[14663]: Starting up daemon
Mar 10 05:13:12 Backend-VM nrpe[14663]: **Bind to port 5666 on 10.1.0.4 failed: Cannot assign requested address**.
Mar 10 05:13:12 Backend-VM nrpe[14663]: **Cannot bind to any address.**
Mar 10 05:13:12 Backend-VM systemd[1]: nagios-nrpe-server.service: Main process exited, code=exited, status=1/FAILURE
Mar 10 05:13:12 Backend-VM systemd[1]: nagios-nrpe-server.service: Failed with result 'exit-code'.
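A hedged pointer, assuming the stock Debian/Ubuntu layout with /etc/nagios/nrpe.cfg: port 5666 is configured on the monitored host (the NRPE side), and the "Bind to port 5666 on 10.1.0.4 failed: Cannot assign requested address" line usually means the address NRPE is told to bind to is not present on any local interface. A short sketch of where to look:
ip -4 addr show                                                # which addresses does this host actually have?
grep -E '^(server_address|server_port)' /etc/nagios/nrpe.cfg   # e.g. server_port=5666, server_address=10.1.0.4
# if server_address points at an IP the host does not own, change it or comment it out
sudo systemctl restart nagios-nrpe-server.service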

Cannot start MongoDB in Ubuntu. "Job for mongodb.service failed because the control process exited with error code."

I tried running MongoDB on my Ubuntu 20.04 machine. Running
sudo systemctl start mongodb
in the shell returns:
Job for mongodb.service failed because the control process exited with error code.
See "systemctl status mongodb.service" and "journalctl -xe" for details.
And sudo systemctl status mongodb returns:
● mongodb.service - LSB: An object/document-oriented database
Loaded: loaded (/etc/init.d/mongodb; generated)
Active: failed (Result: exit-code) since Fri 2022-03-04 11:18:53 IST; 2s ago
Docs: man:systemd-sysv-generator(8)
Process: 82695 ExecStart=/etc/init.d/mongodb start (code=exited, status=1/FAILURE)
Mar 04 11:18:52 hp systemd[1]: Starting LSB: An object/document-oriented database...
Mar 04 11:18:52 hp mongodb[82695]: * Starting database mongodb
Mar 04 11:18:53 hp mongodb[82695]: ...fail!
Mar 04 11:18:53 hp systemd[1]: mongodb.service: Control process exited, code=exited, status=1/FAILURE
Mar 04 11:18:53 hp systemd[1]: mongodb.service: Failed with result 'exit-code'.
Mar 04 11:18:53 hp systemd[1]: Failed to start LSB: An object/document-oriented database.
What do I do?
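A hedged first step, assuming the default Ubuntu package paths (the distro mongodb package logs to /var/log/mongodb/mongodb.log; the mongodb-org package uses mongod.log instead): the generated LSB unit hides the real error, so look at the daemon's own log and at the journal before anything else.
journalctl -u mongodb.service --no-pager -n 50   # systemd's view of the failed start
sudo tail -n 50 /var/log/mongodb/mongodb.log     # mongod's own log usually names the actual cause
ls -ld /var/lib/mongodb                          # the data directory must be writable by the mongodb user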

Cannot start Zookeeper service on CentOS7

When trying to start the zookeeper service I get the following:
● zookeeper.service
Loaded: loaded (/etc/systemd/system/zookeeper.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2020-04-02 16:19:24 EDT; 5min ago
Process: 5201 ExecStop=/usr/local/kafka/kafka_2.13-2.4.1/bin/zookeeper-server-stop.sh (code=exited, status=1/FAILURE)
Process: 4882 ExecStart=/usr/local/kafka/kafka_2.13-2.4.1/bin/zookeeper-server-start.sh /usr/local/kafka/kafka_2.13-2.4.1/config/zookeeper.properties (code=exited, status=127)
Main PID: 4882 (code=exited, status=127)
Apr 02 16:19:24 centos.localdomain systemd[1]: Started zookeeper.service.
Apr 02 16:19:24 centos.localdomain systemd[1]: zookeeper.service: main process exited, code=exited, status=127/n/a
Apr 02 16:19:24 centos.localdomain systemd[1]: zookeeper.service: control process exited, code=exited status=1
Apr 02 16:19:24 centos.localdomain systemd[1]: Unit zookeeper.service entered failed state.
Apr 02 16:19:24 centos.localdomain systemd[1]: zookeeper.service failed.
The zookeeper.service file is configured as follows
[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
User=specadmin
ExecStart=/usr/local/kafka/kafka_2.13-2.4.1/bin/zookeeper-server-start.sh /usr/local/kafka/kafka_2.13-2.4.1/config/zookeeper.properties
ExecStop=/usr/local/kafka/kafka_2.13-2.4.1/bin/zookeeper-server-stop.sh
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
When trying to run zookeeper manually with the same user that is configured in the service file, everything works fine.
Please advise.
It turns out the issue was related to the environment variables systemd uses.
systemd starts services with a fixed $PATH; changes made in /etc/profile, /etc/bashrc and the like are not applied to units.
Zookeeper's start script runs java, which has to be on the search path, but since systemd doesn't read the files where that path is set, the script couldn't find java (exit status 127 is the shell's "command not found").
I solved it by overriding the search path with an Environment=PATH=... line in the zookeeper service file, listing all the required directories.
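A minimal sketch of that override; the JDK path is illustrative and should be replaced with wherever java actually lives. The same line can go in the [Service] section of the unit itself or in a drop-in such as /etc/systemd/system/zookeeper.service.d/path.conf:
[Service]
Environment="PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/java/jdk1.8.0_202/bin"
Reload and restart afterwards so the change takes effect:
sudo systemctl daemon-reload
sudo systemctl restart zookeeper.service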

mongod.service: Failed at step USER spawning /usr/bin/mkdir: No such process

I'm running an EC2 instance with Ubuntu 16.04.
I recently tried to upgrade MongoDB from 3.2 to 3.6.
Then I ran sudo service mongod start and the mongod service failed to start.
Below is the error message.
mongod.service - High-performance, schema-free document-oriented database
Loaded: loaded (/etc/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2018-03-05 05:48:12 UTC; 11s ago
Docs: https://docs.mongodb.org/manual
Process: 18587 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=217/USER)
Main PID: 16567 (code=exited, status=100)
Mar 05 05:48:12 ip-172-31-18-34 systemd[1]: Starting High-performance, schema-free document-oriented database...
Mar 05 05:48:12 ip-172-31-18-34 systemd[18587]: mongod.service: Failed at step USER spawning /usr/bin/mkdir: No such proc
Mar 05 05:48:12 ip-172-31-18-34 systemd[1]: mongod.service: Control process exited, code=exited status=217
Mar 05 05:48:12 ip-172-31-18-34 systemd[1]: Failed to start High-performance, schema-free document-oriented database.
Mar 05 05:48:12 ip-172-31-18-34 systemd[1]: mongod.service: Unit entered failed state.
Mar 05 05:48:12 ip-172-31-18-34 systemd[1]: mongod.service: Failed with result 'exit-code'.
And I never edited a single line of the default mongod.service file.
How can I fix this issue?
If you have just copied the mongod.service file from somewhere, you should edit the User= setting in its [Service] section so it names a user that actually exists on the system (status=217/USER means systemd could not resolve the configured user).
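A minimal sketch of what that section might look like, assuming the package created the usual mongodb user and group (check with id mongodb first); paths and options are illustrative:
[Service]
User=mongodb
Group=mongodb
ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb
ExecStart=/usr/bin/mongod --config /etc/mongod.conf
Run sudo systemctl daemon-reload after editing, then start the service again.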
Changing ownership of the /var/lib/mongodb directory worked for me.
Command to change ownership:
sudo chown -R mongodb:mongodb /var/lib/mongodb
**Old response:**
$ sudo systemctl status mongodb
● mongodb.service - MongoDB Database
Loaded: loaded (/etc/systemd/system/mongodb.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2019-07-31 18:52:49 IST; 17h ago
Docs: https://docs.mongodb.org/manual
Main PID: 13728 (code=exited, status=217/USER)
Jul 31 18:52:49 LAP-LIN-712 systemd[1]: Started MongoDB Database.
Jul 31 18:52:49 LAP-LIN-712 systemd[13728]: mongodb.service: Failed to determine user credentials: No such process
Jul 31 18:52:49 LAP-LIN-712 systemd[13728]: mongodb.service: Failed at step USER spawning /usr/bin/mongod: No such process
Jul 31 18:52:49 LAP-LIN-712 systemd[1]: mongodb.service: Main process exited, code=exited, status=217/USER
Jul 31 18:52:49 LAP-LIN-712 systemd[1]: mongodb.service: Failed with result 'exit-code'.
**New Response:**
$ sudo systemctl status mongodb
● mongodb.service - MongoDB Database
Loaded: loaded (/etc/systemd/system/mongodb.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-08-01 11:54:10 IST; 8s ago
Docs: https://docs.mongodb.org/manual
Main PID: 8143 (mongod)
Tasks: 20 (limit: 4915)
CGroup: /system.slice/mongodb.service
└─8143 /usr/bin/mongod --quiet --config /etc/mongod.conf
/usr/bin/mongod - No such process ---> this error can mean the mongod binary has been placed in some other path instead of the default one. Try changing the ExecStart path in the service file and start it again with sudo systemctl start mongodb.
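A short sketch of checking that (the corrected ExecStart line is illustrative):
which mongod                    # where the binary really is
# point ExecStart= in the service file at that path, for example:
# ExecStart=/usr/local/bin/mongod --quiet --config /etc/mongod.conf
sudo systemctl daemon-reload
sudo systemctl start mongodb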
For me it was solved by commenting out the "User" and "Group" directives in the service file, as described here.
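For reference, a minimal sketch of that change (it makes mongod run as root, so treat it as a workaround rather than a recommendation):
[Service]
#User=mongodb
#Group=mongodb
Run sudo systemctl daemon-reload afterwards so systemd picks up the edit.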
I hope it helps somebody.

Job for kube-apiserver.service failed because the control process exited with error code

At the beginning I want to point out that I am fairly new to Linux systems, and totally, totally new to Kubernetes, so my question may be trivial.
As stated in the title, I have a problem with setting up the Kubernetes cluster. I am working on Atomic Host Version: 7.1707 (2017-07-31 16:12:06).
I am following this guide:
http://www.projectatomic.io/docs/gettingstarted/
In addition to that I followed this:
http://www.projectatomic.io/docs/kubernetes/
To be precise, I ran this command:
rpm-ostree install kubernetes-master --reboot
Everything was going fine until this point:
systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler
The problem is with:
systemctl start etcd kube-apiserver
as it gives me back this response:
Job for kube-apiserver.service failed because the control process
exited with error code. See "systemctl status kube-apiserver.service"
and "journalctl -xe" for details.
systemctl status kube-apiserver.service
gives me back:
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Fri 2017-08-25 14:29:56 CEST; 2s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Process: 17876 ExecStart=/usr/bin/kube-apiserver $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBE_ETCD_SERVERS $KUBE_API_ADDRESS $KUBE_API_PORT $KUBELET_PORT $KUBE_ALLOW_PRIV $KUBE_SERVICE_ADDRESSES $KUBE_ADMISSION_CONTROL $KUBE_API_ARGS (code=exited, status=255)
Main PID: 17876 (code=exited, status=255)
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service: main process exited, code=exited, status=255/n/a
Aug 25 14:29:56 master systemd[1]: Failed to start Kubernetes API Server.
Aug 25 14:29:56 master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service failed.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service holdoff time over, scheduling restart.
Aug 25 14:29:56 master systemd[1]: start request repeated too quickly for kube-apiserver.service
Aug 25 14:29:56 master systemd[1]: Failed to start Kubernetes API Server.
Aug 25 14:29:56 master systemd[1]: Unit kube-apiserver.service entered failed state.
Aug 25 14:29:56 master systemd[1]: kube-apiserver.service failed.
I have no clue where to start and I will be more than thankful for any advice.
It turned out to be a typo in /etc/kubernetes/config. I had misunderstood the "# Comma separated list of nodes in the etcd cluster" comment.
I don't know how to close the thread or anything.
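For anyone hitting the same thing, a hedged sketch of what that setting is expected to look like (depending on the packaging it may live in /etc/kubernetes/apiserver instead, and the URLs are illustrative):
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
# with several etcd members: --etcd-servers=http://etcd1:2379,http://etcd2:2379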