Error daemonizing Celery in Ubuntu 16.04

I am trying to daemonize the Celery worker for my Django application, but I get the following error when checking the Celery service status:
Starting Celery Service...
celery.service: Control process exited, code=exited status=127
Failed to start Celery Service.
celery.service: Unit entered failed state.
celery.service: Failed with result 'exit-code'.
The contents of the /etc/default/celeryd file:
CELERYD_NODES="worker1 worker2 worker3"
CELERY_BIN="/usr/local/bin/celery"
CELERY_APP="djangoapp"
CELERYD_CHDIR="/home/djangoapp/"
CELERYD_OPTS="--time-limit=300 --concurrency=4"
CELERYD_LOG_LEVEL="INFO"
CELERYD_LOG_FILE="/var/celery/log/%n%I.log"
CELERYD_PID_FILE="/var/celery/run/%n.pid"
CELERYD_USER="nobody"
CELERYD_GROUP="www-data"
CELERY_CREATE_DIRS=1
The contents of the /etc/systemd/system/celery.service file:
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=nobody
Group=www-data
EnvironmentFile=-/etc/conf.d/celery
WorkingDirectory=/home/celery
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
I have been trying to solve this for the last hour but have had no luck.
Can someone explain the reason behind the error and how to fix it?
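For anyone hitting the same failure: exit status 127 from /bin/sh means "command not found", which here usually means ${CELERY_BIN} expanded to an empty string. Note that the unit above reads EnvironmentFile=-/etc/conf.d/celery while the settings shown live in /etc/default/celeryd, so the variables may never be loaded. A quick diagnostic sketch, using only the paths from the question:
# Check which environment file the unit actually loads
systemctl cat celery | grep EnvironmentFile
# If it points at /etc/conf.d/celery while the settings are in /etc/default/celeryd,
# align the two (edit the unit or move the file), then reload and retry:
sudo systemctl daemon-reload
sudo systemctl restart celery
sudo journalctl -u celery -e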

Related

CentOS 8 systemctl not finding service

Running systemctl start adstichr fails with:
Failed to start adstichr.service: Unit adstichr.service not found.
So I have made the following unit file at /etc/systemd/system/adstichr.service:
[Unit]
Description=AdStichr Player
After=network.target
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=always
RestartSec=1
User=root
WorkingDirectory=/home/adstichrplayer
ExecStart=/usr/bin/node app.js
ExecStop=/bin/kill -INT $MAINPID
[Install]
WantedBy=multi-user.target
However, when I try to start it, I am told the unit can't be found. How do I get this to work? It is working on my Ubuntu server.
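A sketch of the usual first checks for "Unit ... not found", assuming the file really is at /etc/systemd/system/adstichr.service: systemd only rescans unit files at boot or on an explicit reload, and the file must be readable by root.
# Make systemd rescan its unit files, then try again
sudo systemctl daemon-reload
sudo systemctl start adstichr.service
# Confirm the file name, location, and permissions
ls -l /etc/systemd/system/adstichr.service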

Not able to start Kafka-Connect as a service on CentOS 7

I have a Kafka environment (Zookeeper + Kafka Server + Kafka Connect) which runs perfectly when I use the command line to start each individual component on CentOS 7.
Now I am setting up these Kafka components to run as services. For this I have created .service files and placed them in the /etc/systemd/system folder. Following are the files:
zookeeper.service
#!/bin/bash
# vi /etc/systemd/system/zookeeper.service
[Unit]
Description=This service will start Zookeeper server which will be used by Kafka Server.
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
ExecStart=/opt/interactcrm/kafka_2.11-1.0.1/bin/zookeeper-server-start.sh /opt/interactcrm/kafka_2.11-1.0.1/config/zookeeper.properties
ExecStop=/opt/interactcrm/kafka_2.11-1.0.1/bin/zookeeper-server-stop.sh
TimeoutStartSec=0
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
kafka.service
#!/bin/bash
# vi /etc/systemd/system/kafka.service
[Unit]
Description=This service will start Kafka server.
Requires=zookeeper.service
After=zookeeper.service
[Service]
Type=simple
ExecStart=/opt/interactcrm/kafka_2.11-1.0.1/bin/kafka-server-start.sh /opt/interactcrm/kafka_2.11-1.0.1/config/server.properties
ExecStop=/opt/interactcrm/kafka_2.11-1.0.1/bin/kafka-server-stop.sh
TimeoutStartSec=0
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
Kafka-connect.service
#!/bin/bash
# vi /etc/systemd/system/kafkaconnect.service
[Unit]
Description=This service will start Kafka Connect Service.
Requires=network.target remote-fs.target nss-lookup.target kafka.service
After=network.target remote-fs.target nss-lookup.target kafka.service
[Service]
Type=forking
Environment="KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=10040 -Dcom.sun.management.jmxremote.local.only=true -Dcom.sun.management.jmxremote.authenticate=false"
Environment="LOG_DIR=/var/log/kafka-logs"
ExecStart=/opt/interactcrm/kafka_2.11-1.0.1/bin/connect-distributed.sh /opt/interactcrm/kafka_2.11-1.0.1/config/connect-distributed.properties
TimeoutStartSec=1000
#Restart=on-abnormal
#SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
The Zookeeper and Kafka services start without any issue. I can create topics and then do operations on them. The issue is with the Kafka Connect service.
When I try to start the service using the systemctl command, the service does not start. It gets stuck on the following log:
Oct 19 18:29:20 localhost.localdomain connect-distributed.sh[1071]: [2018-10-19 18:29:20,713] INFO Added plugin 'io.debezium.connector.mysql.MySqlConnector...er:136)
Oct 19 18:29:20 localhost.localdomain connect-distributed.sh[1071]: [2018-10-19 18:29:20,713] INFO Added plugin 'io.debezium.transforms.ByLogicalTableRoute...er:136)
Oct 19 18:29:20 localhost.localdomain connect-distributed.sh[1071]: [2018-10-19 18:29:20,713] INFO Added plugin 'io.debezium.transforms.UnwrapFromEnvelope'...er:136)
Oct 19 18:29:20 localhost.localdomain connect-distributed.sh[1071]: [2018-10-19 18:29:20,761] INFO Loading plugin from: /opt/interactcrm/debezium/debezium ...er:184)
Oct 19 18:29:28 localhost.localdomain connect-distributed.sh[1071]: [2018-10-19 18:29:28,725] INFO Registered loader: PluginClassLoader{pluginLocation=file...er:207)
I cannot find any log for this process in the message logs after this line, and there is no error in any other log. The process gets stuck on this line:
INFO Registered loader: PluginClassLoader{pluginLocation=file...er:207)
No matter how much I increase the timeout, this process never starts. But when I run the same command from the command line, the service starts properly.
I have tried removing all connectors from the plugin path to see if the service starts, but it gets stuck on the same line.
Following is my reference point:
Kafka-Connect Service
I faced the same problem on Debian 9. It turned out to be because the service needs a WorkingDirectory; otherwise Kafka Connect never fully starts up.
So your service should look like this:
#!/bin/bash
# vi /etc/systemd/system/kafkaconnect.service
[Unit]
Description=This service will start Kafka Connect Service.
Requires=network.target remote-fs.target nss-lookup.target kafka.service
After=network.target remote-fs.target nss-lookup.target kafka.service
[Service]
Type=forking
Environment="KAFKA_JMX_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=10040 -Dcom.sun.management.jmxremote.local.only=true -Dcom.sun.management.jmxremote.authenticate=false"
Environment="LOG_DIR=/var/log/kafka-logs"
# Or whatever directory you want to use
WorkingDirectory=/opt/interactcrm/kafka_2.11-1.0.1
ExecStart=/opt/interactcrm/kafka_2.11-1.0.1/bin/connect-distributed.sh /opt/interactcrm/kafka_2.11-1.0.1/config/connect-distributed.properties
TimeoutStartSec=1000
#Restart=on-abnormal
#SuccessExitStatus=143
[Install]
WantedBy=multi-user.target
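If you edit the unit file in place, systemd has to be told about the change before it takes effect. A minimal apply-and-watch sketch (unit name taken from the comment in the file above):
sudo systemctl daemon-reload
sudo systemctl restart kafkaconnect.service
# Follow the startup output live
sudo journalctl -u kafkaconnect.service -f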
The configuration below worked for me on Ubuntu:
[Unit]
Requires=kafka.service
After=kafka.service
[Service]
Type=simple
User=kafka
ExecStart=/bin/sh -c '/home/kafka/kafka/bin/connect-distributed.sh /home/kafka/kafka/config/connect-distributed.properties > /home/kafka/kafka/kafka_connect.log 2>&1'
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
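One plausible reason this variant avoids the hang: connect-distributed.sh normally stays in the foreground and never forks, so under Type=forking systemd keeps waiting for a parent process to exit and the start never completes, no matter how large TimeoutStartSec is. Type=simple instead tells systemd to treat the launched process itself as the main process:
[Service]
# connect-distributed.sh stays in the foreground; let systemd track it directly
Type=simple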

celery daemon using systemd not running

I am using Ubuntu 16 and systemd to run Celery as a daemon. I have created the unit file, but I am not able to run Celery as a service. Why am I getting this error?
/etc/systemd/system/celery.service
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=celery
Group=celery
EnvironmentFile=-/etc/default/celery
WorkingDirectory=/srv/weaver/src
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
The file at /etc/default/celery:
ENABLED="true"
CELERYD_NODES="worker1"
#CELERYD_NODES="worker1 worker2 worker3"
CELERY_BIN="/usr/local/bin/celery"
CELERY_APP="main:celery_app"
CELERYD_CHDIR="/srv/weaver/src"
CELERYD_OPTS=" --queue=weaver --time-limit=100000 --concurrency=2"
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery2/%N.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERY_CREATE_DIRS=1
# Change Celery Beat
CELERYBEAT_CHDIR="/srv/weaver/src"
# Log files
CELERYBEAT_LOG_FILE="/var/log/celery/celerybeat.log"
# Celery Beat Log files
CELERYBEAT_PID_FILE="/var/run/celery/celerybeat.pid"
# Scheduler for celery
CELERYBEAT_OPTS=" --pidfile=/var/run/celery/celerybeat.pid --sch
Output of checking the service status:
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2017-01-12 17:12:32 IST; 2min 17s ago
Process: 18561 ExecStop=/bin/sh -c ${CELERY_BIN} multi stopwait ${CELERYD_NODES} --pidfile=${CELERYD_PID_FILE} (code=exited, status=0/SUCCESS)
Process: 18540 ExecStart=/bin/sh -c ${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FIL
Main PID: 18555 (code=exited, status=1/FAILURE)
Jan 12 17:12:30 fb01 systemd[1]: Starting Celery Service...
Jan 12 17:12:31 fb01 sh[18540]: celery multi v3.1.23 (Cipater)
Jan 12 17:12:31 fb01 sh[18540]: > Starting nodes...
Jan 12 17:12:31 fb01 sh[18540]: > worker1@fb01: OK
Jan 12 17:12:31 fb01 systemd[1]: Started Celery Service.
Jan 12 17:12:32 fb01 systemd[1]: celery.service: Main process exited, code=exited, status=1/FAILURE
Jan 12 17:12:32 fb01 sh[18561]: celery multi v3.1.23 (Cipater)
Jan 12 17:12:32 fb01 sh[18561]: > worker1@fb01: DOWN
Jan 12 17:12:32 fb01 systemd[1]: celery.service: Unit entered failed state.
Jan 12 17:12:32 fb01 systemd[1]: celery.service: Failed with result 'exit-code'.
I just hit this exact issue. My problem was a configuration issue. In particular, I wasn't setting
CELERYD_LOG_LEVEL
in my environment file (/etc/default/celery in your case). It looks like you have made the same mistake.
(I also had a few other configuration issues that I needed to resolve. I discovered these by running Celery on the command line.)
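In other words, the fix is to add the missing variable to the environment file; any valid Celery log level works, INFO being a typical choice:
# /etc/default/celery
CELERYD_LOG_LEVEL="INFO"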
I had this same symptom; it turned out to be permissions.
In my case something like:
chmod o+x /srv/weaver/src
sorted it.
Note that this is not the best way to grant the required permission, but that's not pertinent to this answer.
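For completeness, a narrower sketch than o+x, assuming the worker really runs as the celery user and group from the unit file above: grant the traverse bit through the directory's group instead of to everyone.
# Hypothetical tighter alternative: give only the celery group traverse access
sudo chgrp celery /srv/weaver/src
sudo chmod g+x /srv/weaver/src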

Cannot create systemd script for Puma

I created a service script named "puma.service" in /etc/systemd/system/ with the following contents:
[Unit]
Description=Puma HTTP Server
After=network.target
[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/username/appdir/current
ExecStart=/bin/bash -lc "/home/username/appdir/current/sbin/puma -C /home/username/appdir/current/config/puma.rb /home/username/appdir/current/config.ru"
Restart=always
[Install]
WantedBy=multi-user.target
I enabled the service, and when it is started I get the following log from systemctl:
● puma.service - Puma HTTP Server
Loaded: loaded (/etc/systemd/system/puma.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Wed 2016-12-14 10:09:46 UTC; 12min ago
Process: 16889 ExecStart=/bin/bash -lc cd /home/username/appdir/current && bundle exec puma -C /home/username/appdir..
Main PID: 16889 (code=exited, status=127)
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Main process exited, code=exited, status=127/n/a
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Unit entered failed state.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Failed with result 'exit-code'.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Service hold-off time over, scheduling restart.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: Stopped Puma HTTP Server.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Start request repeated too quickly.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: Failed to start Puma HTTP Server.
However, when I run the command in an SSH terminal, the server starts and runs perfectly. Are there any changes I have to make in the service file?
Note:
I have changed the dirnames for your convenience.
I did some research: status 127 is caused by the executable not being on the path. But I guess that shouldn't be the problem here.
Can you shed some light?
I found the problem: I changed the ExecStart as shown below and it worked like a charm:
ExecStart=/home/username/.rbenv/shims/bundle exec puma -e production -C ./config/puma.rb config.ru
PIDFile=/home/username/appdir/shared/tmp/pids/puma.pid
bundle should be taken from the rbenv shims, and both Puma's config file (config/puma.rb) and the application's rackup file (config.ru) can be given as relative paths.
One way to solve it is to specify a PID file; systemd will then look at that file to check on the service status.
Here's how we use this in our scripts (adapted to your sample):
ExecStart=/bin/bash -lc '/home/username/appdir/current/sbin/puma -C /home/username/appdir/current/config/puma.rb /home/username/appdir/current/config.ru --pidfile /home/username/appdir/current/tmp/pids/puma.pid'
PIDFile=/home/username/appdir/current/tmp/pids/puma.pid
Take note that you might have to configure --pidfile via your -C puma.rb file instead of passing it as a parameter. I'm just showing it here to illustrate that --pidfile (in the Puma config) should be the same as PIDFile in the service file.
As for why the error message reads that way, I'm not sure myself and am interested in the answer too.
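For reference, Puma's config DSL has a pidfile directive, so the flag can instead live in the file passed via -C. A sketch, reusing the hypothetical path from the example above:
# config/puma.rb
pidfile "/home/username/appdir/current/tmp/pids/puma.pid"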
For RVM users, try:
[Unit]
Description=Puma HTTP Server
After=network.target
[Service]
Type=simple
User=user_name
Group=user_name
WorkingDirectory=/home/user_name/apps/app_name/current
Environment=RAILS_ENV=production
ExecStart=/home/user_name/.rvm/bin/rvm ruby-3.1.2@app_name do bundle exec --keep-file-descriptors puma -C /home/user_name/apps/app_name/current/config/puma/production.rb
ExecReload=/bin/kill -USR1 $MAINPID
StandardOutput=append:/home/user_name/apps/app_name/current/log/puma_access.log
StandardError=append:/home/user_name/apps/app_name/current/log/puma_error.log
SyslogIdentifier=app_name-puma
Restart=always
KillMode=process
[Install]
WantedBy=multi-user.target
If you are not using a gemset, change
ExecStart=/home/user_name/.rvm/bin/rvm ruby-3.1.2@app_name ....
to
ExecStart=/home/user_name/.rvm/bin/rvm default ....

systemd: Pass start/stop to service

I am trying to create a systemd service for starting and stopping the SoftEther VPN server.
A tutorial I found suggests the following init.d script:
#!/bin/sh
# chkconfig: 2345 99 01
# description: SoftEther VPN Server
DAEMON=/usr/local/vpnserver/vpnserver
LOCK=/var/lock/subsys/vpnserver
test -x $DAEMON || exit 0
case "$1" in
start)
$DAEMON start
touch $LOCK
;;
stop)
$DAEMON stop
rm $LOCK
;;
restart)
$DAEMON stop
sleep 3
$DAEMON start
;;
*)
echo "Usage: $0 {start|stop|restart}"
exit 1
esac
exit 0
But I'd like to use systemd, so I wrote the following service file:
[Unit]
Description=Softether VPN server
After=syslog.target
After=network.target
[Service]
Type=simple
ExecStart=/usr/local/vpnserver/vpnserver start
ExecStop=/usr/local/vpnserver/vpnserver stop
# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=300
[Install]
WantedBy=multi-user.target
But this script does not keep the VPN server running. sudo systemctl status softethervpn gives me the following status:
● softethervpn.service - Softether VPN server
Loaded: loaded (/lib/systemd/system/softethervpn.service; disabled)
Active: deactivating (stop) since Mon 2016-04-18 19:11:41 CEST; 1s ago
Process: 1463 ExecStart=/usr/local/vpnserver/vpnserver start (code=exited, status=0/SUCCESS)
Main PID: 1463 (code=exited, status=0/SUCCESS); Control PID: 1474 (vpnserver)
CGroup: /system.slice/softethervpn.service
├─1471 /usr/local/vpnserver/vpnserver execsvc
└─control
└─1474 /usr/local/vpnserver/vpnserver stop
Apr 18 19:11:40 raspberrypi systemd[1]: Started Softether VPN server.
Apr 18 19:11:41 raspberrypi vpnserver[1463]: The SoftEther VPN Server service has been started.
Apr 18 19:11:42 raspberrypi vpnserver[1474]: Stopping the SoftEther VPN Server service ...
Apr 18 19:11:42 raspberrypi vpnserver[1474]: SoftEther VPN Server service has been stopped.
How do I need to correct my service file so that it works correctly?
It seems that the Type needs to be forking. The following script works for me (found at SoftEther Configurationfile for Systemd):
[Unit]
Description=SoftEther VPN Server
After=network.target
[Service]
Type=forking
ExecStart=/usr/local/vpnserver/vpnserver start
ExecStop=/usr/local/vpnserver/vpnserver stop
[Install]
WantedBy=multi-user.target
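This matches the status output in the question: vpnserver start forks a daemon (the vpnserver execsvc process, PID 1471) and the launcher (PID 1463) exits, so under Type=simple systemd saw its main process exit and immediately ran ExecStop. Type=forking tells systemd to expect exactly that behavior. To apply the change:
sudo systemctl daemon-reload
sudo systemctl restart softethervpn
systemctl status softethervpn   # should now report active (running)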
For SoftEther, this works:
[Unit]
Description=SoftEther VPN Server
After=network.target auditd.service
[Service]
Type=forking
TasksMax=infinity
EnvironmentFile=-/usr/local/vpnserver
ExecStart=/usr/local/vpnserver/vpnserver start
ExecStop=/usr/local/vpnserver/vpnserver stop
KillMode=process
Restart=on-failure
# Hardening
PrivateTmp=yes
ProtectHome=yes
ProtectSystem=full
ReadOnlyDirectories=/
ReadWriteDirectories=-/usr/local/vpnserver
[Install]
WantedBy=multi-user.target
This is the official service file for SoftEther, except this line has been removed:
CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SYS_NICE CAP_SYSLOG CAP_SETUID
which caused errors for me, e.g.:
-- Alert: SoftEther VPN Kernel --
Unable to create /usr/local/vpnserver/.VPN-49BDCFFA14.
-- Alert: SoftEther VPN Kernel --
Unable to create /usr/local/vpnserver/.VPN-49BDCFFA14.