CentOS 8 systemctl not finding service

systemctl start adstichr
Failed to start adstichr.service: Unit adstichr.service not found.
So I have created the following unit file inside /etc/systemd/system:
adstichr.service
[Unit]
Description=AdStichr Player
After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=always
RestartSec=1
User=root
WorkingDirectory=/home/adstichrplayer
ExecStart=/usr/bin/node app.js
ExecStop=/bin/kill -INT $MAINPID
[Install]
WantedBy=multi-user.target
However, when I try to start it I am told the unit can't be found. How do I get this to work? The same unit file works on my Ubuntu server.
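For reference, a minimal sketch of what is usually needed after dropping a new unit file into /etc/systemd/system (systemd may not notice a newly created unit until it is told to reload its configuration, and the file name must end in .service):
sudo systemctl daemon-reload
sudo systemctl start adstichr.service
systemctl status adstichr.service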

Related

Not able to start Kafka-Connect as a service on CentOS 7

I have a Kafka environment (Zookeeper + Kafka Server + Kafka-Connect) which runs perfectly when I start each individual component from the command line on CentOS 7.
Now I am setting up these Kafka components to run as services. For this I have created .service files and placed them in the /etc/systemd/system folder. The files are as follows:
zookeeper.service
#!/bin/bash
# vi /etc/systemd/system/zookeeper.service
[Unit]
Description=This service will start Zookeeper server which will be used by Kafka Server.
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
ExecStart=/opt/interactcrm/kafka_2.11-1.0.1/bin/zookeeper-server-start.sh /opt/interactcrm/kafka_2.11-1.0.1/config/zookeeper.properties
ExecStop=/opt/interactcrm/kafka_2.11-1.0.1/bin/zookeeper-server-stop.sh
TimeoutStartSec=0
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
kafka.service
#!/bin/bash
# vi /etc/systemd/system/kafka.service
[Unit]
Description=This service will start Kafka server.
Requires=zookeeper.service
After=zookeeper.service
[Service]
Type=simple
ExecStart=/opt/interactcrm/kafka_2.11-1.0.1/bin/kafka-server-start.sh /opt/interactcrm/kafka_2.11-1.0.1/config/server.properties
ExecStop=/opt/interactcrm/kafka_2.11-1.0.1/bin/kafka-server-stop.sh
TimeoutStartSec=0
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
Kafka-connect.service
#!/bin/bash
# vi /etc/systemd/system/kafkaconnect.service
[Unit]
Description=This service will start Kafka Connect Service.
Requires=network.target remote-fs.target nss-lookup.target kafka.service kafka.service
After=network.target remote-fs.target nss-lookup.target kafka.service
[Service]
Type=forking
Environment="KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=10040 -Dcom.sun.management.jmxremote.local.only=true -Dcom.sun.management.jmxremote.authenticate=false"
Environment="LOG_DIR=/var/log/kafka-logs"
ExecStart=/opt/interactcrm/kafka_2.11-1.0.1/bin/connect-distributed.sh /opt/interactcrm/kafka_2.11-1.0.1/config/connect-distributed.properties
TimeoutStartSec=1000
#Restart=on-abnormal
#SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
The Zookeeper and Kafka services start without any issue. I can create topics and then perform operations on them. The issue is with the Kafka Connect service.
When I try to start it using the systemctl command, the service does not start. It gets stuck at the following log ::
Oct 19 18:29:20 localhost.localdomain connect-distributed.sh[1071]: [2018-10-19 18:29:20,713] INFO Added plugin 'io.debezium.connector.mysql.MySqlConnector...er:136)
Oct 19 18:29:20 localhost.localdomain connect-distributed.sh[1071]: [2018-10-19 18:29:20,713] INFO Added plugin 'io.debezium.transforms.ByLogicalTableRoute...er:136)
Oct 19 18:29:20 localhost.localdomain connect-distributed.sh[1071]: [2018-10-19 18:29:20,713] INFO Added plugin 'io.debezium.transforms.UnwrapFromEnvelope'...er:136)
Oct 19 18:29:20 localhost.localdomain connect-distributed.sh[1071]: [2018-10-19 18:29:20,761] INFO Loading plugin from: /opt/interactcrm/debezium/debezium ...er:184)
Oct 19 18:29:28 localhost.localdomain connect-distributed.sh[1071]: [2018-10-19 18:29:28,725] INFO Registered loader: PluginClassLoader{pluginLocation=file...er:207)
I cannot find any log output for this process in the message logs after this line, and there is no error in any other log. The process gets stuck on this line ::
INFO Registered loader: PluginClassLoader{pluginLocation=file...er:207)
No matter how much I increase the timeout, this process never starts. But when I run the same command from the command line, the service starts properly.
I have tried removing all connectors from the plugin path to see if the service starts, but it gets stuck on the same line.
Following is my reference point ::
Kafka-Connect Service
I faced the same problem on Debian 9. It turned out the service needs a WorkingDirectory, otherwise Kafka Connect never fully loads.
So your service should look like this:
#!/bin/bash
# vi /etc/systemd/system/kafkaconnect.service
[Unit]
Description=This service will start Kafka Connect Service.
Requires=network.target remote-fs.target nss-lookup.target kafka.service kafka.service
After=network.target remote-fs.target nss-lookup.target kafka.service
[Service]
Type=forking
Environment="KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=10040 -Dcom.sun.management.jmxremote.local.only=true -Dcom.sun.management.jmxremote.authenticate=false"
Environment="LOG_DIR=/var/log/kafka-logs"
WorkingDirectory="/opt/interactcrm/kafka_2.11-1.0.1" <--- or whatever directory you to use
ExecStart=/opt/interactcrm/kafka_2.11-1.0.1/bin/connect-distributed.sh /opt/interactcrm/kafka_2.11-1.0.1/config/connect-distributed.properties
TimeoutStartSec=1000
#Restart=on-abnormal
#SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
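After editing the unit file, systemd needs to reload its configuration and the service has to be restarted; a minimal sketch (the unit name kafkaconnect.service is taken from the comment at the top of the file):
sudo systemctl daemon-reload
sudo systemctl restart kafkaconnect.service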
The configuration below worked for me on Ubuntu:
[Unit]
Requires=kafka.service
After=kafka.service
[Service]
Type=simple
User=kafka
ExecStart=/bin/sh -c '/home/kafka/kafka/bin/connect-distributed.sh /home/kafka/kafka/config/connect-distributed.properties > /home/kafka/kafka/kafka_connect.log 2>&1'
Restart=on-abnormal
[Install]
WantedBy=multi-user.target

How to enable services on BeagleBone Black?

[Unit]
Description=Splash screen
DefaultDependencies=no
[Service]
Type=oneshot
ExecStart=/usr/local/bin/psplash
[Install]
WantedBy=basic.target
Job for .service failed because the control process exited with an error code.
Here is a systemd unit file to run a Python script as a service.
It will start execution at boot:
[Unit]
Description= Python First Service
After=multi-user.target
[Service]
Type=simple
ExecStart=/usr/bin/python /home/debian/serv_demo.py
Restart=on-abort
[Install]
WantedBy=multi-user.target
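To have it start at boot, the unit must be installed and enabled. A minimal sketch, assuming the file above is saved under a hypothetical name such as /etc/systemd/system/serv_demo.service:
sudo systemctl daemon-reload
sudo systemctl enable serv_demo.service
sudo systemctl start serv_demo.service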
I followed this example and it worked well for my BBB:
https://gist.github.com/tstellanova/7323116

Unable to run Mongo daemon

I am not able to run mongod. I used this command:
sudo service mongodb start
Which gives:
Failed to start mongodb.service: Unit mongodb.service is masked.
The file /etc/systemd/system/mongodb.service is empty. I tried pasting this:
[Unit]
Description=MongoDB Database Service
Wants=network.target
After=network.target
[Service]
ExecStart=/usr/bin/mongod --config /etc/mongod.conf
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
User=mongodb
Group=mongodb
StandardOutput=syslog
StandardError=syslog
[Install]
WantedBy=multi-user.target
into it, but every time I save it (using su), it becomes empty again after closing the file.
Then I used:
sudo service mongod start
(I created mongod.service in /etc/systemd/system/ and put the required code in it.)
It gives this output:
Failed to start mongod.service: Unit mongod.service not found.
I have been stuck on this for 2 hours now. I removed MongoDB and reinstalled it from scratch, but that didn't help either. What is the problem here? I am on Ubuntu 16.04.
When the service returns an error saying that it is "masked", try executing sudo systemctl unmask mongodb.
https://askubuntu.com/questions/770054/mongodb-3-2-doesnt-start-on-lubuntu-16-04lts-as-service
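A masked unit is one whose unit file has been symlinked to /dev/null, so systemd refuses to start it until the mask is removed. A minimal sketch of the full sequence, assuming the package installed the unit under the name mongodb.service:
sudo systemctl unmask mongodb
sudo systemctl daemon-reload
sudo systemctl start mongodb
systemctl status mongodb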

How to manage a Kafka broker with systemd?

I am trying to manage a Kafka broker with systemd. Here is the unit file:
[Unit]
Description=Kafka with broker id (%i)
After=network.target
After=zk.service
[Service]
Type=simple
SyslogIdentifier=kafka (%i)
WorkingDirectory=/opt/service/kafka_2.11-0.9.0.1
LimitNOFILE=16384:163840
ExecStart=/usr/bin/bash -c 'bin/kafka-server-start.sh /opt/service/units/kafka/%i.properties'
ExecStop=/usr/bin/bash -c 'bin/kafka-server-stop.sh /opt/service/units/kafka/%i.properties'
[Install]
WantedBy=multi-user.target
With that file, I can start Kafka with systemctl --user start kafka@0.service and systemctl --user start kafka@1.service.
But when I try to stop one of those daemons with systemctl --user stop kafka@0.service, both daemons are stopped! So why can't I stop just one broker?
Something like this:
[Unit]
Description=Kafka with broker id (%i)
After=network.target
After=zk.service
[Service]
Type=forking
SyslogIdentifier=kafka (%i)
Restart=on-failure
LimitNOFILE=16384:163840
ExecStart=/opt/service/kafka_2.11-0.9.0.1/bin/kafka-server-start.sh -daemon /opt/service/units/kafka/%i.properties
[Install]
WantedBy=multi-user.target
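For context, kafka-server-stop.sh looks up every running Kafka broker process and kills them all, which is why stopping one instance stopped both; the unit above drops ExecStop so that systemd only signals the processes belonging to that instance's own cgroup. Installed as a template unit (e.g. kafka@.service, an assumed file name), each broker can then be managed on its own:
systemctl --user start kafka@0.service
systemctl --user start kafka@1.service
systemctl --user stop kafka@0.service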

ExecStartPost in supervisor?

I am moving from systemd to supervisord.
In a systemd service you can write:
[Service]
Type=simple
User=root
ExecStart=mycommand
ExecStartPost=anothercommand
I want to write the equivalent of ExecStartPost in supervisor:
[program:myservice]
command=thesamecommand
???command_after=??
autostart=true
autorestart=true
user=root
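Supervisor has no direct ExecStartPost equivalent; one common workaround (a sketch only, with mycommand/anothercommand as the placeholders from the question and the wrapper path purely hypothetical) is to have supervisor run a small wrapper script that starts the main command and then runs the post-start step:
[program:myservice]
command=/usr/local/bin/myservice-wrapper.sh
autostart=true
autorestart=true
user=root
; make sure stop signals also reach the wrapped child process
stopasgroup=true
And the wrapper itself:
#!/bin/bash
# /usr/local/bin/myservice-wrapper.sh (hypothetical path)
mycommand &        # start the main command in the background
MAIN_PID=$!
anothercommand     # post-start step, equivalent to ExecStartPost
wait "$MAIN_PID"   # stay in the foreground so supervisor keeps tracking the service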