Upstart to systemd - Ubuntu 16.04

We have a .conf file named private-api.conf in /etc/init, so the full path is /etc/init/private-api.conf.
The contents of the file are as follows:
# start when server starts
start on runlevel [23456]
# Stop when server shuts down/reboots
stop on shutdown
#Respawn the process if it crashes
#If it respawns more than 10 times in 5 seconds stop
respawn
respawn limit 10 5
#expect fork
script
cd /home/ubuntu/private-api && exec java -jar -Dspring.profiles.active=stage private-api-SNAPSHOT.jar > private-api.log 2>&1
end script
The next command I need to run is:
sudo initctl reload-configuration
After which I am supposed to manage the service using:
service private-api start/stop/restart/status
But when I do, I get the following error:
initctl: Unable to connect to Upstart: Failed to connect to socket /com/ubuntu/upstart: Connection refused
I found that Upstart no longer works on Ubuntu 16.04, which has moved to systemd,
and that the systemd unit file needs to be in /etc/systemd/system, with the file extension .service.
After which the following two commands need to be run:
1. sudo systemctl daemon-reload
2. sudo systemctl start xyz.service
I am referring to the following links:
1. https://wiki.ubuntu.com/SystemdForUpstartUsers
2. https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files
Here's what I've put together so far using the above references:
[Unit]
Description=Upstart for Private API
After=network.target network-online.target
Wants=network-online.target
[Service]
User=root
WorkingDirectory=/home/ubuntu/private-api
ExecStart=/usr/bin/java -classpath home/ubuntu/private-api/private-api-0.0.1-SNAPSHOT.jar -Dspring.profiles.active=stage > home/ubuntu/private-api/private-api.log 2>&1
SuccessExitStatus=143
Restart=on-failure
RestartSec=120s
[Install]
WantedBy=multi-user.target
I reloaded the systemd configuration:
sudo systemctl daemon-reload
But when I check the service status, I get "path is not absolute" errors:
sudo systemctl status private-api.service
Apr 05 08:48:56 ip-10-10-1-153 systemd[1]: [/etc/systemd/system/private-api.service:9] Executable path is not absolute, ignoring: ExecStart=/usr/bin/java -classpath /home/ubuntu/private/astro-private-0.0.1-SNAPSHOT.jar -Dspring.profiles.active=dev > private.log 2>&1
Apr 05 08:48:56 ip-10-10-1-153 systemd[1]: private-api.service: Service lacks both ExecStart= and ExecStop= setting. Refusing.
Apr 05 08:51:40 ip-10-10-1-153 systemd[1]: [/etc/systemd/system/private-api.service:9] Executable path is not absolute, ignoring: ExecStart=/usr/bin/java -classpath /home/ubuntu/private/astro-private-0.0.1-SNAPSHOT.jar -Dspring.profiles.active=dev > /home/ubuntu/private/private.log 2>&1
Apr 05 08:51:40 ip-10-10-1-153 systemd[1]: private-api.service: Service lacks both ExecStart= and ExecStop= setting. Refusing.
Can someone help me convert my current Upstart .conf script?

You have already set WorkingDirectory, so you can use the command from your script almost as-is; just keep the jar name and drop the shell redirection (ExecStart is not run through a shell, so > has no effect there, and stdout/stderr go to the journal by default):
ExecStart=/usr/bin/java -Dspring.profiles.active=stage -jar private-api-SNAPSHOT.jar
Hopefully this works.
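For completeness, here is a minimal full unit sketch, assuming the paths and jar name from the question; adjust them to your layout. If you still want a log file instead of the journal, wrap the command in a shell, since ExecStart itself performs no redirection:
[Unit]
Description=Private API
Wants=network-online.target
After=network-online.target
[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/private-api
# The shell wrapper exists only so the > redirection works; drop it to log to the journal instead
ExecStart=/bin/sh -c 'exec /usr/bin/java -Dspring.profiles.active=stage -jar private-api-SNAPSHOT.jar > private-api.log 2>&1'
# java exits with 143 on SIGTERM, so treat that as a clean stop
SuccessExitStatus=143
Restart=on-failure
[Install]
WantedBy=multi-user.target
After sudo systemctl daemon-reload and sudo systemctl start private-api, also run sudo systemctl enable private-api; enabling the unit is what replaces Upstart's "start on runlevel" behaviour at boot.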

Related

Cannot start Zookeeper service on CentOS7

When trying to start the Zookeeper service, I get the following:
● zookeeper.service
Loaded: loaded (/etc/systemd/system/zookeeper.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2020-04-02 16:19:24 EDT; 5min ago
Process: 5201 ExecStop=/usr/local/kafka/kafka_2.13-2.4.1/bin/zookeeper-server-stop.sh (code=exited, status=1/FAILURE)
Process: 4882 ExecStart=/usr/local/kafka/kafka_2.13-2.4.1/bin/zookeeper-server-start.sh /usr/local/kafka/kafka_2.13-2.4.1/config/zookeeper.properties (code=exited, status=127)
Main PID: 4882 (code=exited, status=127)
Apr 02 16:19:24 centos.localdomain systemd[1]: Started zookeeper.service.
Apr 02 16:19:24 centos.localdomain systemd[1]: zookeeper.service: main process exited, code=exited, status=127/n/a
Apr 02 16:19:24 centos.localdomain systemd[1]: zookeeper.service: control process exited, code=exited status=1
Apr 02 16:19:24 centos.localdomain systemd[1]: Unit zookeeper.service entered failed state.
Apr 02 16:19:24 centos.localdomain systemd[1]: zookeeper.service failed.
The zookeeper.service file is configured as follows
[Unit]
Requires=network.target remote-fs.target
After=network.target remote-fs.target
[Service]
Type=simple
User=specadmin
ExecStart=/usr/local/kafka/kafka_2.13-2.4.1/bin/zookeeper-server-start.sh /usr/local/kafka/kafka_2.13-2.4.1/config/zookeeper.properties
ExecStop=/usr/local/kafka/kafka_2.13-2.4.1/bin/zookeeper-server-stop.sh
Restart=on-abnormal
[Install]
WantedBy=multi-user.target
When I run Zookeeper manually as the same user configured in the service file, everything works fine.
Please advise.
It turns out the issue was related to the environment variables systemd uses.
systemd uses a fixed $PATH, and changes made to /etc/profile, /etc/bashrc, and the like are not applied to systemd services.
Zookeeper runs java, which needs to be on the search path, but since systemd doesn't read the files where the search path is set, the Zookeeper start script couldn't find java.
I solved it by overriding the search path with an Environment=PATH=... entry in the zookeeper service file, listing all the required directories.
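For illustration, the override is a single line in the [Service] section. The JDK directory below is an assumption; substitute wherever your java binary actually lives (for example, the output of dirname "$(readlink -f "$(which java)")" from a login shell):
[Service]
...
# Make java visible to the start script; systemd does not read /etc/profile and friends
Environment=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/java/default/bin
...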

Binding to IPv6 address not available since kernel does not support IPv6

Trying to autostart mongodb, but it does not work:
$ sudo systemctl status mongod.service -l
● mongod.service - High-performance, schema-free document-oriented database
Loaded: error (Reason: Invalid argument)
Active: inactive (dead)
Docs: https://docs.mongodb.org/manual
Nov 03 00:28:08 xxxxx systemd[1]: mongod.service lacks both ExecStart= and ExecStop= setting. Refusing.
Nov 03 00:28:13 xxxx systemd[1]: mongod.service lacks both ExecStart= and ExecStop= setting. Refusing.
$ sudo systemd-analyze verify /usr/lib/systemd/system/mongod.service
Binding to IPv6 address not available since kernel does not support IPv6.
We had the same file under both /etc/systemd/system/mongod.service and /usr/lib/systemd/system/mongod.service.
I removed the file /etc/systemd/system/mongod.service and ran:
systemctl disable mongod
systemctl enable mongod
systemctl start mongod
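Worth noting: a unit in /etc/systemd/system takes precedence over one of the same name in /usr/lib/systemd/system, which is why the stale copy was shadowing the packaged one. To see which file systemd actually loads:
# Print the unit file (and any drop-in overrides) systemd has in effect for mongod
systemctl cat mongod
# Or show just the path of the winning unit file
systemctl show -p FragmentPath mongod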

Filebeat Service will not start on RHEL 7

I have a problem with my Filebeat installation.
When I try to start it with "service filebeat start", it says "Starting Filebeat". After "service filebeat status" I get 4 PIDs (up to this point everything looks "normal"):
[root@(Server) run]# service filebeat status
Filebeat is running with pid: 30650 30657 30658 30659
But after checking the PID, we see that it is not running:
[root@(Server) run]# ps -ef | grep 30650
root 30665 31360 0 16:27 pts/0 00:00:00 grep --color=auto 30650
Trying to start it with systemctl doesn't help:
[root@(Server) run]# systemctl start filebeat
Job for filebeat.service failed because a configured resource limit was exceeded. See "systemctl status filebeat.service" and "journalctl -xe" for details.
Status says:
[root@Server run]# systemctl status filebeat
● filebeat.service - LSB: start and stop filebeat
Loaded: loaded (/etc/rc.d/init.d/filebeat; bad; vendor preset: disabled)
Active: failed (Result: resources) since Tue 2017-09-26 16:30:33 CEST; 1min 41s ago
Docs: man:systemd-sysv-generator(8)
Process: 32118 ExecStart=/etc/rc.d/init.d/filebeat start (code=exited, status=0/SUCCESS)
Sep 26 16:30:33 Server... systemd[1]: Starting LSB: start and stop filebeat...
Sep 26 16:30:33 Server... filebeat[32118]: Starting Filebeat
Sep 26 16:30:33 Server... su[32119]: (to user) root on none
Sep 26 16:30:33 Server... systemd[1]: PID file /var/run/filebeat.pid not readable (yet?) after start.
Sep 26 16:30:33 Server... systemd[1]: Failed to start LSB: start and stop filebeat.
Sep 26 16:30:33 Server... systemd[1]: Unit filebeat.service entered failed state.
Sep 26 16:30:33 Server... systemd[1]: filebeat.service failed.
Does somebody have any idea?
Regards
The problem was ownership permissions. I installed Filebeat as a non-root user, and the "data" directory had root user and group ownership. After changing that, it runs and starts automatically after boot.
Regards
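A sketch of that fix, assuming the default RHEL data path /var/lib/filebeat and a dedicated service user named filebeat; both the path and the user name are assumptions, so match them to your installation:
# Hand ownership of Filebeat's data directory to the user the service runs as
sudo chown -R filebeat:filebeat /var/lib/filebeat
sudo systemctl restart filebeat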

RHEL7 systemd start mongo services automatically?

I have a RHEL7 server that is part of a Mongo cluster. There are three mongo processes that I would like to start automatically at boot: one mongod, one arbiter, and one mongos:
/usr/bin/mongod -f /etc/mongo_shard001.conf
/usr/bin/mongod -f /etc/mongoarb.conf
/usr/bin/mongos -f /etc/mongos.conf
I have been trying to create systemd services for these commands, e.g.:
[Unit]
Description=mongo configuration server
After=network.target
[Service]
User=mongod
Group=mongod
ExecStart=/usr/bin/mongod -f /etc/mongoconf.conf
[Install]
WantedBy=multi-user.target
When I try to do sudo systemctl daemon-reload && sudo systemctl start mongoconf, I get this error:
● mongoconf.service - mongo configuration server
Loaded: loaded (/etc/systemd/system/mongoconf.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2017-02-02 14:38:34 AWST; 20s ago
Process: 5114 ExecStart=/usr/bin/mongod -f /etc/mongoconf.conf (code=exited, status=1/FAILURE)
Main PID: 5114 (code=exited, status=1/FAILURE)
Feb 02 14:38:34 mdb1 systemd[1]: Started mongo configuration server.
Feb 02 14:38:34 mdb1 systemd[1]: Starting mongo configuration server...
Feb 02 14:38:34 mdb1 systemd[1]: mongoconf.service: main process exited, code=exited, status=1/FAILURE
Feb 02 14:38:34 mdb1 systemd[1]: Unit mongoconf.service entered failed state.
Feb 02 14:38:34 mdb1 systemd[1]: mongoconf.service failed.
I have also tried using a forking type with a PID file:
[Unit]
Description=mongo configuration server
After=network.target
[Service]
User=mongod
Group=mongod
ExecStart=/usr/bin/mongod -f /etc/mongoconf.conf --pidfilepath /var/lib/mongoconf/pid --fork
Type=forking
PIDFile=/var/run/mongodb/mongoconf/pid
[Install]
WantedBy=multi-user.target
But it gives this error:
● mongoconf.service - mongo configuration server
Loaded: loaded (/etc/systemd/system/mongoconf.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2017-02-02 14:45:36 AWST; 4s ago
Process: 5256 ExecStart=/usr/bin/mongod -f /etc/mongoconf.conf --pidfilepath /var/lib/mongoconf/pid --fork (code=exited, status=1/FAILURE)
Main PID: 5114 (code=exited, status=1/FAILURE)
Feb 02 14:45:36 mdb1 systemd[1]: Starting mongo configuration server...
Feb 02 14:45:36 mdb1 mongod[5256]: about to fork child process, waiting until server is ready for connections.
Feb 02 14:45:36 mdb1 mongod[5256]: forked process: 5258
Feb 02 14:45:36 mdb1 systemd[1]: mongoconf.service: control process exited, code=exited status=1
Feb 02 14:45:36 mdb1 systemd[1]: Failed to start mongo configuration server.
Feb 02 14:45:36 mdb1 systemd[1]: Unit mongoconf.service entered failed state.
Feb 02 14:45:36 mdb1 systemd[1]: mongoconf.service failed.
Starting the mongo config server manually works fine and creates the PID file:
/usr/bin/mongod -f /etc/mongoconf.conf --pidfilepath /var/lib/mongoconf/pid --fork
The version of mongod I am using is the one from mongodb.com, and I installed it following their install guide.
db version v3.4.1
git version: 5e103c4f5583e2566a45d740225dc250baacfbd7
OpenSSL version: OpenSSL 1.0.1e-fips 11 Feb 2013
allocator: tcmalloc
modules: none
build environment:
distmod: rhel70
distarch: x86_64
target_arch: x86_64
from this repo
[mongodb-org-3.4]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.4/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.4.asc
I am wondering if I am going about this the wrong way. Is there a better way to do this?
I know you said RHEL 7, but since this is the only answer coming up on DuckDuckGo for this question, it may be useful. On Ubuntu 15 and up:
sudo systemctl enable mongod.service
Here is my solution.
Make a bash script with these lines:
/usr/bin/mongod -f /etc/mongo_shard001.conf
/usr/bin/mongod -f /etc/mongoarb.conf
/usr/bin/mongos -f /etc/mongos.conf
and then add this line to the system crontab, /etc/crontab (@reboot runs it once at boot):
@reboot root cd /foldername && ./scriptname.sh
systemd would be a better solution, if anyone knows how to set it up; the mongo documentation is no help. (One possible setup is sketched below.)
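For what it's worth, here is a minimal unit sketch for one of the three processes, reusing the shard config from the question. It assumes the .conf files do not set fork: true; if they do, either remove that option or keep --fork and pair Type=forking with a PIDFile that matches --pidfilepath:
[Unit]
Description=mongo shard server
After=network.target
[Service]
User=mongod
Group=mongod
# Run in the foreground and let systemd supervise the process directly,
# so no --fork and no PIDFile are needed
Type=simple
ExecStart=/usr/bin/mongod -f /etc/mongo_shard001.conf
Restart=on-failure
[Install]
WantedBy=multi-user.target
Duplicate the unit for /etc/mongoarb.conf and /etc/mongos.conf (with /usr/bin/mongos for the router), then systemctl enable each one so all three start at boot.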

Cannot create systemd script for Puma

I created a service script named "puma.service" in /etc/systemd/system/ with the following contents:
[Unit]
Description=Puma HTTP Server
After=network.target
[Service]
Type=simple
User=ubuntu
WorkingDirectory=/home/username/appdir/current
ExecStart=/bin/bash -lc "/home/username/appdir/current/sbin/puma -C /home/username/appdir/current/config/puma.rb /home/username/appdir/current/config.ru"
Restart=always
[Install]
WantedBy=multi-user.target
I enabled the service, and when it starts I get the following log from systemctl:
● puma.service - Puma HTTP Server
Loaded: loaded (/etc/systemd/system/puma.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Wed 2016-12-14 10:09:46 UTC; 12min ago
Process: 16889 ExecStart=/bin/bash -lc cd /home/username/appdir/current && bundle exec puma -C /home/username/appdir..
Main PID: 16889 (code=exited, status=127)
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Main process exited, code=exited, status=127/n/a
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Unit entered failed state.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Failed with result 'exit-code'.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Service hold-off time over, scheduling restart.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: Stopped Puma HTTP Server.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: puma.service: Start request repeated too quickly.
Dec 14 10:09:46 ip-172-31-29-40 systemd[1]: Failed to start Puma HTTP Server.
However, when I run the command in an SSH terminal, the server starts and runs perfectly. Are there any changes I have to make in the service file?
Note:
I have changed the directory names for convenience.
I did some research, and the cause of status 127 is the executable not being on the path. But I assumed that wouldn't be a problem here.
Can you shed some light?
I found the problem, changed the ExecStart as mentioned below, and it worked like a charm:
ExecStart=/home/username/.rbenv/shims/bundle exec puma -e production -C ./config/puma.rb config.ru
PIDFile=/home/username/appdir/shared/tmp/pids/puma.pid
bundle should be taken from the rbenv shims, and both Puma's config file (config/puma.rb) and the rackup file (config.ru) can be given as relative paths, since they resolve against WorkingDirectory.
One way to solve it is to specify a PID file; systemd will look at that file to check the service status.
Here's how we use this in our scripts (adapted to your sample):
ExecStart=/bin/bash -lc '/home/username/appdir/current/sbin/puma -C /home/username/appdir/current/config/puma.rb /home/username/appdir/current/config.ru --pidfile /home/username/appdir/current/tmp/pids/puma.pid'
PIDFile=/home/username/appdir/current/tmp/pids/puma.pid
Take note that you might have to configure --pidfile via your -C puma.rb file instead of passing it as a parameter. I'm just showing it here to illustrate that --pidfile (in the Puma config) should be the same as PIDFile in the service file.
As for why the error message reads that way, I'm not sure myself and am interested in the answer too.
For RVM users, try:
[Unit]
Description=Puma HTTP Server
After=network.target
[Service]
Type=simple
User=user_name
Group=user_name
WorkingDirectory=/home/user_name/apps/app_name/current
Environment=RAILS_ENV=production
ExecStart=/home/user_name/.rvm/bin/rvm ruby-3.1.2@app_name do bundle exec --keep-file-descriptors puma -C /home/user_name/apps/app_name/current/config/puma/production.rb
ExecReload=/bin/kill -USR1 $MAINPID
StandardOutput=append:/home/user_name/apps/app_name/current/log/puma_access.log
StandardError=append:/home/user_name/apps/app_name/current/log/puma_error.log
SyslogIdentifier=app_name-puma
Restart=always
KillMode=process
[Install]
WantedBy=multi-user.target
If you are not using a gemset, change
ExecStart=/home/user_name/.rvm/bin/rvm ruby-3.1.2@app_name ...
to
ExecStart=/home/user_name/.rvm/bin/rvm default ...
keeping the rest of the command the same.