Service starting gunicorn failing with "Start request repeated too quickly" - service

I am trying to start a service that runs gunicorn as the backend server for Flask, but it is not working. Nginx, running as the frontend server for React, works fine.
Server:
Virtualization: vmware
Operating System: Red Hat Enterprise Linux 8.4 (Ootpa)
CPE OS Name: cpe:/o:redhat:enterprise_linux:8.4:GA
Kernel: Linux 4.18.0-305.3.1.el8_4.x86_64
Architecture: x86-64
Service file in /etc/systemd/system/myservice.service:
[Unit]
Description="Description"
After=network.target
[Service]
User=root
Group=root
WorkingDirectory=/home/project/app/api
ExecStart=/home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app
Restart=always
[Install]
WantedBy=multi-user.target
/app/api:
-rwxr-xr-x. 1 root root 2018 Jun 9 20:06 api.py
drwxrwxr-x+ 5 root root 100 Jun 7 10:11 venv
Error message:
● myservice.service - "Description"
Loaded: loaded (/etc/systemd/system/myservice.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2021-06-10 19:01:01 CEST; 5s ago
Process: 18307 ExecStart=/home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app (code=exited, status=203/EXEC)
Main PID: 18307 (code=exited, status=203/EXEC)
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Service RestartSec=100ms expired, scheduling restart.
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Scheduled restart job, restart counter is at 5.
Jun 10 19:01:01 xxxx systemd[1]: Stopped "Description".
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Start request repeated too quickly.
Jun 10 19:01:01 xxxx systemd[1]: myservice.service: Failed with result 'exit-code'.
Jun 10 19:01:01 xxxx systemd[1]: Failed to start "Description".
Tried, not working:
Adding Environment="PATH=/home/project/app/api/venv/bin" under [Service]
$ systemctl reset-failed myservice.service
$ systemctl daemon-reload
Rebooting, of course.
Tried, working:
Running (as root) /home/project/app/api/venv/bin/gunicorn -b 127.0.0.1:5000 api:app while in the /app/api directory
Does anyone know how to fix this problem?

Typically enough, I figured it out shortly after posting this issue.
SELinux was interfering with permissions on the files and directories, so for anyone experiencing the same issue, make sure to test the following changes (as root):
$ setsebool -P httpd_can_network_connect on
$ chcon -Rt httpd_sys_content_t /path/to/your/Flask/dir
In my case: $ chcon -Rt httpd_sys_content_t /home/project/app/api
While this is NOT a permanent fix, it's worth a try. Check out the SELinux docs for more permanent solutions.
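For a more permanent version of the same change (a sketch, assuming the policycoreutils-python-utils package is installed, which provides semanage), you can record the file context in SELinux policy and then reapply it, instead of relying on chcon:
$ semanage fcontext -a -t httpd_sys_content_t "/home/project/app/api(/.*)?"
$ restorecon -Rv /home/project/app/api
Unlike chcon, the rule added with semanage survives a filesystem relabel.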

Related

systemd service not starting on boot, starts when I restart it

I have made this service file to start a Python script when my Raspberry Pi (4) boots up:
/etc/systemd/system/plants.service
[Unit]
Description=plant-sender
After=network.target
[Service]
Type=simple
User=root
Group=root
WorkingDirectory=/home/theo/Repos/plants-monitor/remote
ExecStart=/usr/bin/python main.py
Restart=on-failure
[Install]
WantedBy=multi-user.target
However, once the Pi is on, I run sudo systemctl status plants and get:
* plants.service - plant-sender
Loaded: loaded (/etc/systemd/system/plants.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2020-03-30 20:22:43 EDT; 1min 45s ago
Process: 323 ExecStart=/usr/bin/python main.py (code=exited, status=1/FAILURE)
Main PID: 323 (code=exited, status=1/FAILURE)
Mar 30 20:22:43 arpi systemd[1]: plants.service: Scheduled restart job, restart counter is at 5.
Mar 30 20:22:43 arpi systemd[1]: Stopped plant-sender.
Mar 30 20:22:43 arpi systemd[1]: plants.service: Start request repeated too quickly.
Mar 30 20:22:43 arpi systemd[1]: plants.service: Failed with result 'exit-code'.
Mar 30 20:22:43 arpi systemd[1]: Failed to start plant-sender.
But, after running sudo systemctl restart plants, the service starts up and everything is fine.
If it doesn't start on boot but does on systemctl restart, I'd be looking at whether /home/theo/Repos/plants-monitor/remote is mounted at that point.
Something may be automounting your home directory, or mounting it only when you log in.
If so, you could change the working directory to something that always exists, even if only as a test.
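If the mount timing does turn out to be the cause, one possible fix (just a sketch, assuming /home is a separate mount point on your Pi) is to declare the dependency explicitly in the [Unit] section, so systemd waits for the mount before starting the script:
[Unit]
Description=plant-sender
After=network.target
RequiresMountsFor=/home/theo/Repos/plants-monitor/remote
RequiresMountsFor is documented in systemd.unit(5); it pulls in, and orders the service after, the mount units needed for the given path.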
Additionally, using journalctl -n 9999 -u plants will get you more log messages, so you can see why it's failing, rather than just seeing the "tried too many times, giving up" messages.

Filebeat Service will not start on RHEL 7

I have a trouble/problem with my Filebeat installation.
When I try to start it with "service filebeat start", it says "Starting Filebeat". After "service filebeat status" I get 4 PIDs (up to this point everything looks "normal"):
[root@(Server) run]# service filebeat status
Filebeat is running with pid: 30650 30657 30658 30659
But after checking the PID, we see that it is not running:
[root@(Server) run]# ps -ef | grep 30650
root 30665 31360 0 16:27 pts/0 00:00:00 grep --color=auto 30650
Trying to start it with systemctl doesn't help:
[root@(Server) run]# systemctl start filebeat
Job for filebeat.service failed because a configured resource limit was exceeded. See "systemctl status filebeat.service" and "journalctl -xe" for details.
Status says:
[root@Server run]# systemctl status filebeat
● filebeat.service - LSB: start and stop filebeat
Loaded: loaded (/etc/rc.d/init.d/filebeat; bad; vendor preset: disabled)
Active: failed (Result: resources) since Tue 2017-09-26 16:30:33 CEST; 1min 41s ago
Docs: man:systemd-sysv-generator(8)
Process: 32118 ExecStart=/etc/rc.d/init.d/filebeat start (code=exited, status=0/SUCCESS)
Sep 26 16:30:33 Server... systemd[1]: Starting LSB: start and stop filebeat...
Sep 26 16:30:33 Server... filebeat[32118]: Starting Filebeat
Sep 26 16:30:33 Server... su[32119]: (to user) root on none
Sep 26 16:30:33 Server... systemd[1]: PID file /var/run/filebeat.pid not readable (yet?) after start.
Sep 26 16:30:33 Server... systemd[1]: Failed to start LSB: start and stop filebeat.
Sep 26 16:30:33 Server... systemd[1]: Unit filebeat.service entered failed state.
Sep 26 16:30:33 Server... systemd[1]: filebeat.service failed.
Does somebody have any idea?
Regards
Problem was "chown permissions". I installed filebeat not as root and the "data" directory had root user & group ownership. After changing that, it runs and starts automatically after boot.
Regards

Integrating New Relic with Celery

I am trying to configure New Relic to work with Celery. I am working on a Django application hosted on Amazon EC2 w/ CentOS 7.
I thought all I needed to do to configure New Relic for Celery was to edit the following line in /etc/systemd/system/celery.service:
ExecStart=/home/myuser/project/venv/bin/celery -A project worker -l info -c 4

and change it to:
ExecStart=/home/myuser/project/newrelic.ini newrelic-admin run-program celery -A project worker -l info -c 4
But I see the following errors:
[root@ip-172-31-60-222 system]# systemctl daemon-reload
[root@ip-172-31-60-222 system]# systemctl restart celery
[root@ip-172-31-60-222 system]# systemctl status celery.service -l
● celery.service - datasidekick celery service
Loaded: loaded (/etc/systemd/system/celery.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2017-03-01 04:16:33 UTC; 900ms ago
Process: 22969 ExecStart=/home/datasidekick/datasidekick/newrelic.ini newrelic-admin run-program /home/datasidekick/datasidekick/venv/bin/celery -A datasidekick worker -l info -c 4 (code=exited, status=203/EXEC)
Main PID: 22969 (code=exited, status=203/EXEC)
Mar 01 04:16:33 ip-172-31-60-222.ec2.internal systemd[1]: Unit celery.service entered failed state.
Mar 01 04:16:33 ip-172-31-60-222.ec2.internal systemd[1]: celery.service failed.
Mar 01 04:16:33 ip-172-31-60-222.ec2.internal systemd[1]: celery.service holdoff time over, scheduling restart.
Mar 01 04:16:33 ip-172-31-60-222.ec2.internal systemd[1]: start request repeated too quickly for celery.service
Mar 01 04:16:33 ip-172-31-60-222.ec2.internal systemd[1]: Failed to start datasidekick celery service.
Mar 01 04:16:33 ip-172-31-60-222.ec2.internal systemd[1]: Unit celery.service entered failed state.
Mar 01 04:16:33 ip-172-31-60-222.ec2.internal systemd[1]: celery.service failed.
I'm not sure what I'm doing wrong. Any help is greatly appreciated!
It's been a while since you asked the question, but the reason for the failure is probably the wrong ExecStart command.
You use /home/myuser/project/newrelic.ini newrelic-admin run-program celery -A project worker -l info -c 4 as the command to start the service. The first part of the command will try to execute /home/myuser/project/newrelic.ini, which is a text file, and fail with a Permission denied error, since text files don't have execute permission by default (which matches the status=203/EXEC above); or it will cause a shell syntax error somewhere and fail as well.
Instead, use:
Environment="NEW_RELIC_CONFIG_FILE=/home/myuser/project/newrelic.ini"
ExecStart=newrelic-admin run-program celery -A project worker -l info -c 4
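One caveat: on CentOS 7, systemd expects an absolute path for the first word of ExecStart, so (assuming newrelic-admin was installed into the same virtualenv as celery, which is just a guess about your setup) the full line would look more like:
ExecStart=/home/myuser/project/venv/bin/newrelic-admin run-program /home/myuser/project/venv/bin/celery -A project worker -l info -c 4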
Run journalctl -u celery -f and paste the logs here.
Also have a look at these links:
https://docs.newrelic.com/docs/agents/python-agent/back-end-services/python-agent-celery
https://discuss.newrelic.com/t/need-some-help-setting-up-new-relic-with-our-celery-workers/30181

RHEL7 systemd start mongo services automatically?

I have a RHEL7 server that is part of a Mongo cluster. There are three mongo processes that I would like to be automatically started on system boot. One mongod, one arbiter and one mongos:
/usr/bin/mongod -f /etc/mongo_shard001.conf
/usr/bin/mongod -f /etc/mongoarb.conf
/usr/bin/mongos -f /etc/mongos.conf
I have been trying to create systemd services for these commands, e.g.:
[Unit]
Description=mongo configuration server
After=network.target
[Service]
User=mongod
Group=mongod
ExecStart=/usr/bin/mongod -f /etc/mongoconf.conf
[Install]
WantedBy=multi-user.target
When I try to do sudo systemctl daemon-reload && sudo systemctl start mongoconf, I get this error:
● mongoconf.service - mongo configuration server
Loaded: loaded (/etc/systemd/system/mongoconf.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2017-02-02 14:38:34 AWST; 20s ago
Process: 5114 ExecStart=/usr/bin/mongod -f /etc/mongoconf.conf (code=exited, status=1/FAILURE)
Main PID: 5114 (code=exited, status=1/FAILURE)
Feb 02 14:38:34 mdb1 systemd[1]: Started mongo configuration server.
Feb 02 14:38:34 mdb1 systemd[1]: Starting mongo configuration server...
Feb 02 14:38:34 mdb1 systemd[1]: mongoconf.service: main process exited, code=exited, status=1/FAILURE
Feb 02 14:38:34 mdb1 systemd[1]: Unit mongoconf.service entered failed state.
Feb 02 14:38:34 mdb1 systemd[1]: mongoconf.service failed.
I have also tried using a forking type with a PID file:
[Unit]
Description=mongo configuration server
After=network.target
[Service]
User=mongod
Group=mongod
ExecStart=/usr/bin/mongod -f /etc/mongoconf.conf --pidfilepath /var/lib/mongoconf/pid --fork
Type=forking
PIDFile=/var/run/mongodb/mongoconf/pid
[Install]
WantedBy=multi-user.target
But that gives this error:
● mongoconf.service - mongo configuration server
Loaded: loaded (/etc/systemd/system/mongoconf.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2017-02-02 14:45:36 AWST; 4s ago
Process: 5256 ExecStart=/usr/bin/mongod -f /etc/mongoconf.conf --pidfilepath /var/lib/mongoconf/pid --fork (code=exited, status=1/FAILURE)
Main PID: 5114 (code=exited, status=1/FAILURE)
Feb 02 14:45:36 mdb1 systemd[1]: Starting mongo configuration server...
Feb 02 14:45:36 mdb1 mongod[5256]: about to fork child process, waiting until server is ready for connections.
Feb 02 14:45:36 mdb1 mongod[5256]: forked process: 5258
Feb 02 14:45:36 mdb1 systemd[1]: mongoconf.service: control process exited, code=exited status=1
Feb 02 14:45:36 mdb1 systemd[1]: Failed to start mongo configuration server.
Feb 02 14:45:36 mdb1 systemd[1]: Unit mongoconf.service entered failed state.
Feb 02 14:45:36 mdb1 systemd[1]: mongoconf.service failed.
Starting the mongo config server manually works fine and creates the PID file:
/usr/bin/mongod -f /etc/mongoconf.conf --pidfilepath /var/lib/mongoconf/pid --fork
The version of mongod I am using is the one from mongodb.com, and I installed it following their install guide.
db version v3.4.1
git version: 5e103c4f5583e2566a45d740225dc250baacfbd7
OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
allocator: tcmalloc
modules: none
build environment:
distmod: rhel70
distarch: x86_64
target_arch: x86_64
from this repo
[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
I am wondering if I am going about this the wrong way; is there a better way to do this?
I know you said RHEL 7, but since this is the only answer coming up on DuckDuckGo for this question, it may be useful. Under Ubuntu 15 and up:
sudo systemctl enable mongod.service
Here is my solution.
Make a bash script with these lines:
/usr/bin/mongod -f /etc/mongo_shard001.conf
/usr/bin/mongod -f /etc/mongoarb.conf
/usr/bin/mongos -f /etc/mongos.conf
and then add this line to your crontab
@reboot root cd /foldername && ./scriptname.sh
systemd would be a better solution, if anyone knows how to set it up; the mongo documentation is no help.
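For what it's worth, a systemd version would be one unit file per process. A sketch for the first one, say /etc/systemd/system/mongo-shard001.service (hypothetical file name; it also assumes the mongod user can write the data, log and PID paths referenced by the config file, which is a common cause of the status=1 exits shown above):
[Unit]
Description=mongod shard001
After=network.target
[Service]
User=mongod
Group=mongod
ExecStart=/usr/bin/mongod -f /etc/mongo_shard001.conf
Restart=on-failure
[Install]
WantedBy=multi-user.target
Repeat for /etc/mongoarb.conf and /etc/mongos.conf (with /usr/bin/mongos for the latter), then sudo systemctl enable each unit so they start at boot.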

celery daemon using systemd not running

I am using Ubuntu 16 and systemd for running Celery as a daemon. I have created the unit file, but I am not able to run Celery as a service. Why am I getting this error?
/etc/systemd/system/celery.service
[Unit]
Description=Celery Service
After=network.target
[Service]
Type=forking
User=celery
Group=celery
EnvironmentFile=-/etc/default/celery
WorkingDirectory=/srv/weaver/src
ExecStart=/bin/sh -c '${CELERY_BIN} multi start ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
ExecStop=/bin/sh -c '${CELERY_BIN} multi stopwait ${CELERYD_NODES} \
--pidfile=${CELERYD_PID_FILE}'
ExecReload=/bin/sh -c '${CELERY_BIN} multi restart ${CELERYD_NODES} \
-A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} \
--logfile=${CELERYD_LOG_FILE} --loglevel=${CELERYD_LOG_LEVEL} ${CELERYD_OPTS}'
[Install]
WantedBy=multi-user.target
The file at /etc/default/celery:
ENABLED="true"
CELERYD_NODES="worker1"
#CELERYD_NODES="worker1 worker2 worker3"
CELERY_BIN="/usr/local/bin/celery"
CELERY_APP="main:celery_app"
CELERYD_CHDIR="/srv/weaver/src"
CELERYD_OPTS=" --queue=weaver --time-limit=100000 --concurrency=2"
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery2/%N.pid"
CELERYD_USER="celery"
CELERYD_GROUP="celery"
CELERY_CREATE_DIRS=1
# Change Celery Beat
CELERYBEAT_CHDIR="/srv/weaver/src"
# Log files
CELERYBEAT_LOG_FILE="/var/log/celery/celerybeat.log"
# Celery Beat Log files
CELERYBEAT_PID_FILE="/var/run/celery/celerybeat.pid"
# Scheduler for celery
CELERYBEAT_OPTS=" --pidfile=/var/run/celery/celerybeat.pid --sch
OUTPUT OF RUNNING SERVICE
● celery.service - Celery Service
Loaded: loaded (/etc/systemd/system/celery.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2017-01-12 17:12:32 IST; 2min 17s ago
Process: 18561 ExecStop=/bin/sh -c ${CELERY_BIN} multi stopwait ${CELERYD_NODES} --pidfile=${CELERYD_PID_FILE} (code=exited, status=0/SUCCESS)
Process: 18540 ExecStart=/bin/sh -c ${CELERY_BIN} multi start ${CELERYD_NODES} -A ${CELERY_APP} --pidfile=${CELERYD_PID_FILE} --logfile=${CELERYD_LOG_FIL
Main PID: 18555 (code=exited, status=1/FAILURE)
Jan 12 17:12:30 fb01 systemd[1]: Starting Celery Service...
Jan 12 17:12:31 fb01 sh[18540]: celery multi v3.1.23 (Cipater)
Jan 12 17:12:31 fb01 sh[18540]: > Starting nodes...
Jan 12 17:12:31 fb01 sh[18540]: > worker1@fb01: OK
Jan 12 17:12:31 fb01 systemd[1]: Started Celery Service.
Jan 12 17:12:32 fb01 systemd[1]: celery.service: Main process exited, code=exited, status=1/FAILURE
Jan 12 17:12:32 fb01 sh[18561]: celery multi v3.1.23 (Cipater)
Jan 12 17:12:32 fb01 sh[18561]: > worker1@fb01: DOWN
Jan 12 17:12:32 fb01 systemd[1]: celery.service: Unit entered failed state.
Jan 12 17:12:32 fb01 systemd[1]: celery.service: Failed with result 'exit-code'.
I just hit this exact issue. My problem was a configuration issue: in particular, I wasn't setting
CELERYD_LOG_LEVEL
in my environment file (/etc/default/celery in your case). It looks like you have made the same mistake.
(I also had a few other configuration issues that I needed to resolve. I discovered these by running celery on the command line.)
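In concrete terms, that means making sure the environment file defines the variable the unit expands, e.g. (INFO is just an example level):
# in /etc/default/celery: consumed via ${CELERYD_LOG_LEVEL} in the unit's ExecStart
CELERYD_LOG_LEVEL="INFO"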
I had this same symptom; it turned out to be permissions.
In my case something like:
chmod o+x /srv/weaver/src
sorted it.
Note that this is not the best way to grant the required permission, but that's not pertinent to this answer.