I have an Upstart template in a Chef cookbook and want to convert it to systemd so that it is supported on Ubuntu 16.04.
I have already converted it, but I am facing an issue: my server is not starting properly.
Below is the Upstart script:
#!upstart
description "Server nodejs"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [!12345]
console log
setuid root
setgid www-data
chdir /srv/
exec /usr/local/bin/node /srv/my_service/src/cli/index.js >>/var/log/my_service/my_service_nodejs.log 2>&1
My conversion of the same to systemd is:
[Unit]
Description=Server nodejs
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/srv/
ExecStart=/usr/local/bin/node /srv/my_service/src/cli/index.js >>/var/log/my_service/my_service_nodejs.log 2>&1
[Install]
WantedBy=multi-user.target
Issues I am facing:
The Node.js server is not running.
my_nodejs.service - Server nodejs
Loaded: loaded (/etc/systemd/system/my_nodejs.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2017-12-28 08:01:14 UTC; 6s ago
Main PID: 5842 (code=exited, status=64)
systemd[1]: my_nodejs.service: Main process exited, code=exited, status=64/n/a
systemd[1]: my_nodejs.service: Unit entered failed state.
systemd[1]: my_nodejs.service: Failed with result 'exit-code'.
Found the issue.
It is because of the >> I added for appending to the log file. systemd does not run ExecStart= through a shell, so >> is not interpreted as a redirection operator; it ends up being passed to node as a literal argument instead, and the process exits.
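A minimal sketch of two ways to get the append-to-log behaviour back, assuming the same paths as above. Note that StandardOutput=append: only exists in systemd 240 and later, so on Ubuntu 16.04 (systemd 229) the shell-wrapper variant is the one that applies:
# Option 1: wrap the command in a shell so the redirection is interpreted (works on 16.04)
ExecStart=/bin/sh -c 'exec /usr/local/bin/node /srv/my_service/src/cli/index.js >> /var/log/my_service/my_service_nodejs.log 2>&1'
# Option 2: on systemd 240+, let systemd append to the file itself
ExecStart=/usr/local/bin/node /srv/my_service/src/cli/index.js
StandardOutput=append:/var/log/my_service/my_service_nodejs.log
StandardError=append:/var/log/my_service/my_service_nodejs.log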
Related
I want to register a Python script as a daemon service, executed at system startup and running continuously in the background. The script opens network sockets and a local log file, and starts a number of threads. The script is well-formed and runs without any errors when started manually.
I used the below service file for registration:
[Unit]
Description=ModBus2KNX Gateway Daemon
After=multi-user.target
[Service]
Type=simple
ExecStart=/usr/bin/python3 /usr/bin/ModBusDaemon.py
[Install]
WantedBy=multi-user.target
Starting the service results in the error below:
● ModBusDaemon.service - ModBus2KNX Gateway Daemon
Loaded: loaded (/lib/systemd/system/ModBusDaemon.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2021-01-04 21:46:29 CET; 6min ago
Process: 1390 ExecStart=/usr/bin/python3 /usr/bin/ModBusDaemon.py (code=exited, status=1/FAILURE)
Main PID: 1390 (code=exited, status=1/FAILURE)
Jan 04 21:46:29 raspberrypi systemd[1]: Started ModBus2KNX Gateway Daemon.
Jan 04 21:46:29 raspberrypi systemd[1]: ModBusDaemon.service: Main process exited, code=exited, status=1/FAILURE
Jan 04 21:46:29 raspberrypi systemd[1]: ModBusDaemon.service: Failed with result 'exit-code'.
Appreciate your support!
Related posts brought me to the resolution of my issue. "Ubuntu systemd custom service failing with python script" refers to the same problem. The proposed solution of adding WorkingDirectory to the [Service] section resolved it for me, though I could not find systemd documentation that clearly describes this implicit dependency on the working directory.
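For reference, a sketch of the [Service] section with that fix applied; the directory below is only an assumption and should be whatever location ModBusDaemon.py expects its log file and other relative paths to resolve against:
[Service]
Type=simple
# assumed data/log directory of the script; adjust to your layout
WorkingDirectory=/home/pi/modbus
ExecStart=/usr/bin/python3 /usr/bin/ModBusDaemon.py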
As MBizm said, you must also add WorkingDirectory.
After that you must also run these commands:
sudo systemctl daemon-reload
sudo systemctl enable your_service.service
sudo systemctl start your_service.service
I have made this service file to start a python script when my raspberry pi (4) boots up:
/etc/systemd/system/plants.service
[Unit]
Description=plant-sender
After=network.target
[Service]
Type=simple
User=root
Group=root
WorkingDirectory=/home/theo/Repos/plants-monitor/remote
ExecStart=/usr/bin/python main.py
Restart=on-failure
[Install]
WantedBy=multi-user.target
However, once the pi is on, I run sudo systemctl status plants, and get:
* plants.service - plant-sender
Loaded: loaded (/etc/systemd/system/plants.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2020-03-30 20:22:43 EDT; 1min 45s ago
Process: 323 ExecStart=/usr/bin/python main.py (code=exited, status=1/FAILURE)
Main PID: 323 (code=exited, status=1/FAILURE)
Mar 30 20:22:43 arpi systemd[1]: plants.service: Scheduled restart job, restart counter is at 5.
Mar 30 20:22:43 arpi systemd[1]: Stopped plant-sender.
Mar 30 20:22:43 arpi systemd[1]: plants.service: Start request repeated too quickly.
Mar 30 20:22:43 arpi systemd[1]: plants.service: Failed with result 'exit-code'.
Mar 30 20:22:43 arpi systemd[1]: Failed to start plant-sender.
But, after running sudo systemctl restart plants, the service starts up and everything is fine.
If it doesn't start on boot but does on systemctl restart, I'd be looking at whether /home/theo/Repos/plants-monitor/remote is mounted at that point.
There may be something automounting your home directory, or mounting it only when you log in.
If so, you could change the working directory to something that always exists, even if only as a test.
Additionally, using journalctl -n 9999 -u plants will get you more log messages, so you can see why it's failing, rather than just seeing the "tried too many times, giving up" messages.
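If the working directory really is the culprit, one hedged alternative to moving the files (assuming /home is a separate or automounted filesystem) is to declare the dependency explicitly in the unit, so systemd waits for the mount instead of failing:
[Unit]
Description=plant-sender
After=network.target
# order and require the mount(s) needed for this path (assumption: /home is its own mount)
RequiresMountsFor=/home/theo/Repos/plants-monitor/remote
RequiresMountsFor= adds both a requirement and an ordering dependency on the mount units backing that path.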
The mongo command gives me the interactive shell, but when I run sudo systemctl enable mongod.service, it gives the following output:
Failed to enable unit: Unit file mongod.service does not exist.
I am using an Ubuntu 17 machine.
Also, sudo systemctl start mongodb doesn't give any output.
On running sudo service mongodb status,
it gives this output:
● mongodb.service - An object/document-oriented database
Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2018-02-15 23:08:02 IST; 1min 37s ago
Docs: man:mongod(1)
Process: 19137 ExecStart=/usr/bin/mongod --unixSocketPrefix=${SOCKETPATH} --config ${CONF} $DAEMON_OPTS (code=exited, status=100)
Main PID: 19137 (code=exited, status=100)
Feb 15 23:08:02 gd systemd[1]: Started An object/document-oriented database.
Feb 15 23:08:02 gd systemd[1]: mongodb.service: Main process exited, code=exited, status=100/n/a
Feb 15 23:08:02 gd systemd[1]: mongodb.service: Unit entered failed state.
Feb 15 23:08:02 gd systemd[1]: mongodb.service: Failed with result 'exit-code'.
How can I solve this problem?
Your mongo service is called mongodb; pretty sure it should be called mongod, like you tried! Similar question here.
If your service is called mongodb you could instead try:
sudo systemctl enable mongodb.service
Starting a service won't give you any output; it just returns you to the prompt. You can use systemctl status like you've done, or you can look at journalctl.
If you’re using systemctl then you can check the service status by running:
sudo systemctl status mongodb.service
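To see why mongod actually exited with status 100 (often a dbpath or permissions problem, though that is a guess here), the unit's journal is usually more telling than the status summary:
sudo journalctl -u mongodb.service -e --no-pager
# or follow it while retrying the start
sudo journalctl -u mongodb.service -f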
I have the following config:
[Unit]
Description=Example .NET Web API Application running on CentOS 7
[Service]
WorkingDirectory=/var/www/FEEDER
ExecStart=/usr/bin/dotnet /var/www/FEEDER/FeedService.MVC.dll
SyslogIdentifier=dotnet-example
User=www-data
Environment=ASPNETCORE_ENVIRONMENT=Development
[Install]
WantedBy=multi-user.target
When I start the program I don't get any error.
The status shows this:
sudo systemctl status kestrel-hellomvc.service
● kestrel-hellomvc.service - Example .NET Web API Application running on CentOS 7
Loaded: loaded (/etc/systemd/system/kestrel-hellomvc.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Tue 2017-10-31 09:26:20 IST; 54min ago
Main PID: 20077 (code=exited, status=131/n/a)
Oct 31 09:26:20 avi-VirtualBox systemd[1]: Stopped Example .NET Web API Application running on CentOS 7.
Oct 31 09:26:20 avi-VirtualBox systemd[1]: Started Example .NET Web API Application running on CentOS 7.
Oct 31 09:26:20 avi-VirtualBox systemd[1]: kestrel-hellomvc.service: Main process exited, code=exited, status=131/n/a
Oct 31 09:26:20 avi-VirtualBox systemd[1]: kestrel-hellomvc.service: Unit entered failed state.
Oct 31 09:26:20 avi-VirtualBox systemd[1]: kestrel-hellomvc.service: Failed with result 'exit-code'.
Warning: kestrel-hellomvc.service changed on disk. Run 'systemctl daemon-reload' to reload units.
The owner of the folder is www-data and the permissions are 0755.
What might be the problem?
Thanks
You should deploy your app myapp with:
sudo cp -a /home/user/myapp/bin/Debug/netcoreapp2.0/publish/* /var/aspnetcore/myapp
i.e.
copy all the files from the publish/build folder to the deployment location (the DLLs plus the *.json files etc.).
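Adapted to the unit in the question (which points at /var/www/FEEDER), a rough sequence would be something like the following; the netcoreapp2.0 folder name is an assumption based on the target framework in use:
# publish the app so the output includes the DLLs plus *.deps.json / *.runtimeconfig.json
dotnet publish -c Release
# copy the publish output to the directory the unit's WorkingDirectory/ExecStart expect
sudo cp -a bin/Release/netcoreapp2.0/publish/. /var/www/FEEDER/
sudo chown -R www-data:www-data /var/www/FEEDER
sudo systemctl daemon-reload
sudo systemctl restart kestrel-hellomvc.service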
I created a service script named "puma.service" in /etc/systemd/system/ with the following contents:
[Unit]
Description=Puma HTTP Server
After=network.target
[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/username/appdir/current
ExecStart=/bin/bash -lc "/home/username/appdir/current/sbin/puma -C /home/username/appdir/current/config/puma.rb /home/username/appdir/current/config.ru"
Restart=always
[Install]
WantedBy=multi-user.target
I enabled the service, and when it is started I get the following log from systemctl:
● puma.service - Puma HTTP Server
Loaded: loaded (/etc/systemd/system/puma.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Wed 2016-12-14 10:09:46 UTC; 12min ago
Process: 16889 ExecStart=/bin/bash -lc cd /home/username/appdir/current && bundle exec puma -C /home/username/appdir..
Main PID: 16889 (code=exited, status=127)
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Main process exited, code=exited, status=127/n/a
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Unit entered failed state.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Failed with result 'exit-code'.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Service hold-off time over, scheduling restart.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: Stopped Puma HTTP Server.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Start request repeated too quickly.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: Failed to start Puma HTTP Server.
However, when I run the command in an SSH terminal, the server starts and runs perfectly. Are there any changes I have to make in the service file?
Note:
I have changed the directory names for your convenience.
I did some research, and the cause of status 127 is the executable not being found in the PATH. But I guess that shouldn't be a problem here.
Can you shed some light?
I found the problem, changed the ExecStart as mentioned below, and it worked like a charm:
ExecStart=/home/username/.rbenv/shims/bundle exec puma -e production -C ./config/puma.rb config.ru
PIDFile=/home/username/appdir/shared/tmp/pids/puma.pid
bundle should be taken from the rbenv shims, and both Puma's config file (config/puma.rb) and the application's rackup file (config.ru) can be given as relative paths.
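Putting that together, the relevant [Service] lines would look roughly like this (assuming the default rbenv shim location and the same directory layout as in the question); the relative paths resolve against WorkingDirectory:
[Service]
WorkingDirectory=/home/username/appdir/current
ExecStart=/home/username/.rbenv/shims/bundle exec puma -e production -C ./config/puma.rb config.ru
PIDFile=/home/username/appdir/shared/tmp/pids/puma.pid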
One way to solve it is to specify a PID file; systemd will then look at that file to check on the service's status.
Here's how we use this in our scripts (adapted to your given sample):
ExecStart=/bin/bash -lc '/home/username/appdir/current/sbin/puma -C /home/username/appdir/current/config/puma.rb /home/username/appdir/current/config.ru --pidfile /home/username/appdir/current/tmp/pids/puma.pid'
PIDFile=/home/username/appdir/current/tmp/pids/puma.pid
Take note that you might have to configure --pidfile via your -C puma.rb file instead of passing it as a parameter. I'm just showing it here to illustrate that --pidfile (in the Puma config) should be the same as PIDFile in the service file.
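If you go the config-file route, the matching line in puma.rb would be the pidfile DSL call, pointing at the same path as PIDFile= in the unit:
# config/puma.rb -- keep this in sync with PIDFile= in puma.service
pidfile "/home/username/appdir/current/tmp/pids/puma.pid"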
As for why the error message is that way, I'm not sure myself and am interested in the answer too.
For RVM users, try:
[Unit]
Description=Puma HTTP Server
After=network.target
[Service]
Type=simple
User=user_name
Group=user_name
WorkingDirectory=/home/user_name/apps/app_name/current
Environment=RAILS_ENV=production
ExecStart=/home/user_name/.rvm/bin/rvm ruby-3.1.2@app_name do bundle exec --keep-file-descriptors puma -C /home/user_name/apps/app_name/current/config/puma/production.rb
ExecReload=/bin/kill -USR1 $MAINPID
StandardOutput=append:/home/user_name/apps/app_name/current/log/puma_access.log
StandardError=append:/home/user_name/apps/app_name/current/log/puma_error.log
SyslogIdentifier=app_name-puma
Restart=always
KillMode=process
[Install]
WantedBy=multi-user.target
If you are not using a gemset, change
ExecStart=/home/user_name/.rvm/bin/rvm ruby-3.1.2@app_name ....
to
ExecStart=/home/user_name/.rvm/bin/rvm default ....