Can't start a systemd service

I am trying to create a service that runs a script in the console, as a first step toward converting all my crontab commands to systemd in the future, but I always get this error. I have tried different tutorials and hit the same problem every time.
# systemctl status hello-world.service
● hello-world.service - Hello World Service
Loaded: loaded (/usr/lib/systemd/system/hello-world.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since mié 2019-10-09 10:06:59 CEST; 4s ago
Process: 26080 ExecStart=/usr/share/nginx/html/scripts-systemd/hello-world.sh (code=exited, status=203/EXEC)
Main PID: 26080 (code=exited, status=203/EXEC)
oct 09 10:06:59 ns37 systemd[1]: Started Hello World Service.
oct 09 10:06:59 ns37 systemd[1]: hello-world.service: main process exited, code=exited, status=203/EXEC
oct 09 10:06:59 ns37 systemd[1]: Unit hello-world.service entered failed state.
oct 09 10:06:59 ns37 systemd[1]: hello-world.service failed.
hello-world.sh file
#!/bin/bash
while $(sleep 30);
do
echo "hello world"
done
hello-world.service file
[Unit]
Description=Hello World Service
After=systemd-user-sessions.service
[Service]
Type=simple
ExecStart=/usr/share/nginx/html/scripts-systemd/hello-world.sh
[Install]
WantedBy=multi-user.target
I'm using CentOS 7.
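(For reference: status=203/EXEC usually means systemd could not execute the script at all, most often because the file is missing the execute bit or has a broken shebang line. A quick check, using the path from the unit above:)
chmod +x /usr/share/nginx/html/scripts-systemd/hello-world.sh
head -n 1 /usr/share/nginx/html/scripts-systemd/hello-world.sh   # should print: #!/bin/bash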
Edit:
What I need to do is execute console commands at certain times every day, due to problems with crontab. I was using this example to check that everything works; once it does, I'll swap in the real commands.
Here are some examples of my crontab entries:
*/10 * * * * cd /usr/share/nginx/html/mywebsite.com; php wp-cron.php >/dev/null 2>&1
0 0 */3 * * date=`date -I`; zip -r /root/copias/copia-archivos-html-webs$date.zip /usr/share/nginx/html
15 15 * * * wget -q -O /dev/null https://mywebsite.com/?run_plugin=key_0_0
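For reference, the first of those schedules maps onto a systemd timer fairly directly. A sketch (the wp-cron unit name is made up, the cd becomes WorkingDirectory=, and php is assumed to live at /usr/bin/php):
# wp-cron.timer
[Timer]
OnCalendar=*:0/10
Unit=wp-cron.service
# wp-cron.service
[Service]
Type=oneshot
WorkingDirectory=/usr/share/nginx/html/mywebsite.com
ExecStart=/usr/bin/php wp-cron.php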
Edit2: Done! I've managed to get it working for now. I'll leave the code here so it can be useful to other people.
hello-world.sh file
#!/usr/bin/env bash
/usr/bin/mysqldump -user -pass db_name >/root/copias/backupname.sql
hello-world.service
[Unit]
Description=CopiaSql
[Service]
Type=oneshot
ExecStart=/bin/bash /usr/share/nginx/html/scripts-systemd/hello-world.sh
[Install]
WantedBy=multi-user.target
hello-world.timer
[Unit]
Description=Runs hello-world.service every 2 minutes
[Timer]
OnCalendar=*:0/2
Unit=hello-world.service
[Install]
WantedBy=timers.target
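To activate the timer, reload systemd and then enable and start the timer unit (not the service); a sketch, assuming the units are installed as above:
systemctl daemon-reload
systemctl enable hello-world.timer
systemctl start hello-world.timer
systemctl list-timers    # shows the next scheduled run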
Thanks everyone for your help!

I have a first-boot install service that displays on the console, and it looks like:
[Unit]
After=multi-user.target
# tty getty service login prompts for tty1 & tty6
# will not be seen until this install completes.
Before=getty@tty1.service getty@tty6.service
[Service]
Type=oneshot
ExecStart=/bin/bash -c "export TERM=vt100;/var/ssi/firstboot_install.sh"
StandardOutput=tty
StandardInput=tty
[Install]
WantedBy=multi-user.target
The script it runs also has this code at the start:
#---------------------------------------------------------------------
# Switch to tty6 so input is allowed from installation questions
# Change back to tty1 at end of this script to show normal booting
# messages from systemd.
#---------------------------------------------------------------------
exec < /dev/tty6 > /dev/tty6
chvt 6
At the end of the script I change it back:
# Now that the system has been registered and has a few channels added,
# I can have the installation go back to the main Anaconda output screen
# on tty1
chvt 1
exec < /dev/tty1 > /dev/tty1
exit 0
This might not be exactly what you want, but you can adapt it to your needs. The goal here is to display something on the console during the boot sequence. My script asks a number of installation questions, and input is NOT allowed on tty1 (the console), which is why I switch to tty6 so that input is allowed during the first-boot installation.
For your script, try:
#!/bin/bash
exec < /dev/tty6 > /dev/tty6
chvt 6
while $(sleep 30);
do
echo "hello world"
done
chvt 1
exec < /dev/tty1 > /dev/tty1
This might be overkill for what you're trying to do, but if you need input from the console, you should do the same with tty6.

Related

How to run a process in daemon mode with systemd service?

I've googled and read quite a few blog posts on this, and I've also been trying things out manually on my EC2 instance. However, I'm still not able to configure the systemd service unit so that it runs the process in the background as I expect. The process I'm running is the Nessus agent service. Here's my service unit definition:
$ cat /etc/systemd/system/nessusagent.service
[Unit]
Description=Nessus
[Service]
ExecStart=/opt/myorg/bin/init_nessus
Type=simple
[Install]
WantedBy=multi-user.target
and here is my script /opt/myorg/bin/init_nessus:
$ cat /opt/myorg/bin/init_nessus
#!/usr/bin/env bash
set -e
NESSUS_MANAGER_HOST=...
NESSUS_MANAGER_PORT=...
NESSUS_CLIENT_GROUP=...
NESSUS_LINKING_KEY=...
#-------------------------------------------------------------------------------
# link nessus agent with manager host
#-------------------------------------------------------------------------------
/opt/nessus_agent/sbin/nessuscli agent link --key=${NESSUS_LINKING_KEY} --host=${NESSUS_MANAGER_HOST} --port=${NESSUS_MANAGER_PORT} --groups=${NESSUS_CLIENT_GROUP}
if [ $? -ne 0 ]; then
echo "Cannot link the agent to the Nessus manager, quitting."
exit 1
fi
/opt/nessus_agent/sbin/nessus-service -q -D
When I run the service, I always get the following:
$ systemctl status nessusagent.service
● nessusagent.service - Nessus
Loaded: loaded (/etc/systemd/system/nessusagent.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Mon 2020-08-24 06:40:40 UTC; 9min ago
Process: 27787 ExecStart=/opt/myorg/bin/init_nessus (code=exited, status=0/SUCCESS)
Main PID: 27787 (code=exited, status=0/SUCCESS)
...
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: + /opt/nessus_agent/sbin/nessuscli agent link --key=... --host=... --port=8834 --groups=...
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: [info] [agent] HostTag::getUnix: setting TAG value to '8596420322084e3ab97d3c39e5c92e00'
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: [info] [agent] Successfully linked to <myorg.com>:8834
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: + '[' 0 -ne 0 ']'
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[28506]: + /opt/nessus_agent/sbin/nessus-service -q -D
However, I can't see the process that I expect to see:
$ ps faux | grep nessus
root 28565 0.0 0.0 12940 936 pts/0 S+ 06:54 0:00 \_ grep --color=auto nessus
If I run the last command manually, I can see it:
$ /opt/nessus_agent/sbin/nessus-service -q -D
$ ps faux | grep nessus
root 28959 0.0 0.0 12940 1016 pts/0 S+ 07:00 0:00 \_ grep --color=auto nessus
root 28952 0.0 0.0 6536 116 ? S 07:00 0:00 /opt/nessus_agent/sbin/nessus-service -q -D
root 28953 0.2 0.0 69440 9996 pts/0 Sl 07:00 0:00 \_ nessusd -q
What is it that I'm missing here?
Eventually I figured out that this was because of the extra -D option in the last command; removing it fixed the issue. Running the process in daemon mode inside a service manager is not the way to go: the process needs to run in the foreground, and the service manager will handle it.
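A minimal sketch of the corrected last line of init_nessus, per that conclusion (the exec is an optional extra so the shell is replaced by the service process):
# run in the foreground; systemd supervises the process itself
exec /opt/nessus_agent/sbin/nessus-service -q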

Program.service ExecStart fails but the program itself runs

I am testing how to run a script using a .service file on CentOS 7.
The script is a very simple loop, just to make sure it runs:
if [ "$1" == "start" ] || [ "$1" == "cycle" ]
then
/u/Test/Bincustom/haltrun_wrap.sh run &
echo $! /u/Test/Locks/start.pid
exit
elif [ "$1" == "stop" ] || [ "$1" == "halt" ]
then
killall -q -9 haltrun_wrap.sh
echo " " /u/Test/Locks/start.pid
elif [ "$1" == "run" ]
then
process_id=$(pidof haltrun_wrap.sh)
#echo $process_id /u/Test/Locks/start.pid
while [ 1 ]
do
CurTime=$(date)
echo $CurTime /u/Test/Logs/log
sleep 30s
done
else
cat /u/Test/Locks/start.pid
cat /u/Test/Logs/log
fi
That script runs fine as the root or test user if I launch it manually.
The Program.service file looks like this:
[Unit]
Description=Program
[Service]
Type=forking
RemainAfterExit=yes
PIDFile=/u/Test/Locks/start.pid
EnvironmentFile=/u/Test/Config/environ
Environment="Base="sudo -u sirsi '/u/Test/Bincustom/Program " "Stop=halt force'" "Start=cycle force'""
ExecStart=/bin/sh $Base$Start
ExecStop=/bin/sh $Base$Stop
[Install]
WantedBy=multi-user.target
WantedBy=WebServices
WantedBy=BCA
The error is always:
● Program.service - Program
Loaded: loaded (/usr/lib/systemd/system/Program.service; enabled; vendor preset: disabled)
Active: failed (Result: resources) since Wed 2017-01-11 14:53:10 MST; 1s ago
Process: 12014 ExecStart=/bin/sh $Base$Start (code=exited, status=0/SUCCESS)
Jan 11 14:53:09 localhost.localdomain systemd[1]: Starting Program...
Jan 11 14:53:10 localhost.localdomain systemd[1]: PID file /u/Test/Locks/start.pid not readable (yet?) after start.
Jan 11 14:53:10 localhost.localdomain systemd[1]: Failed to start Program.
Jan 11 14:53:10 localhost.localdomain systemd[1]: Unit Program.service entered failed state.
Jan 11 14:53:10 localhost.localdomain systemd[1]: Program.service failed.
Obviously I'm doing something wrong in the .service file, but for the life of me I am still missing it.
The issue was in these lines:
Environment="Base="sudo -u sirsi '/u/Test/Bincustom/Program " "Stop=halt force'" "Start=cycle force'""
ExecStart=/bin/sh $Base$Start
ExecStop=/bin/sh $Base$Stop
Apparently .service files do not expand variables the way a shell does.
I also had an issue with sudo not being allowed to run my test script; I had to move the sudo call into the test script.
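For what it's worth, systemd does expand simple Environment= variables in ExecStart; it just doesn't apply the shell's quoting and nesting rules. A sketch of the supported pattern (variable name hypothetical):
[Service]
# an unquoted $VAR is word-split at expansion; ${VAR} would be passed as a single argument
Environment=START_ARGS="cycle force"
ExecStart=/u/Test/Bincustom/Program $START_ARGS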

How to run different files conditionally in systemd config?

I hope this isn't a duplicate question. systemd is really hard to search for...
I have a systemd file that looks like
[Unit]
Description=My Daemon
[Service]
User=root
Type=simple
PIDFile=/var/run/app.pid
ExecStart=/usr/bin/python /opt/app/app.pyc
Restart=always
[Install]
WantedBy=multi-user.target
I want ExecStart to run /usr/bin/python /opt/app/app.pyc if it exists, and /usr/bin/python /opt/app/app.py if it doesn't.
The goal is that on the deployed system there will be no .py file, only a .pyc, while on dev systems we might only have a .py file. How can I get this to work?
Make a small bash script which does what you want and then put that script on the ExecStart line.
#!/bin/bash
if [ -f /opt/app/app.pyc ]
then
    exec /usr/bin/python /opt/app/app.pyc
else
    exec /usr/bin/python /opt/app/app.py
fi
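Save the wrapper somewhere like /opt/app/start_app.sh (a hypothetical path), make it executable, and point the unit at it:
chmod +x /opt/app/start_app.sh
# then, in the unit file:
#   ExecStart=/opt/app/start_app.sh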

PostgreSQL cannot start after changing the data_directory

I use PostgreSQL on Debian.
The postgresql service cannot start after I edit the config file:
#data_directory = '/var/lib/postgresql/9.4/main' # use data in another directory
data_directory = '/opt/data/postgresql/data'
(yeah, I just use a custom directory instead of the default data_directory)
I found this in /var/log/syslog:
Sep 14 10:22:17 thinkserver-ckd postgresql@9.4-main[11324]: Error: could not exec /usr/lib/postgresql/9.4/bin/pg_ctl /usr/lib/postgresql/9.4/bin/pg_ctl start -D /opt/data/postgresql/data -l /var/log/postgresql/postgresql-9.4-main.log -s -o -c config_file="/etc/postgresql/9.4/main/postgresql.conf" :
Sep 14 10:22:17 thinkserver-ckd systemd[1]: postgresql@9.4-main.service: control process exited, code=exited status=1
Sep 14 10:22:17 thinkserver-ckd systemd[1]: Failed to start PostgreSQL Cluster 9.4-main.
Sep 14 10:22:17 thinkserver-ckd systemd[1]: Unit postgresql@9.4-main.service entered failed state.
And nothing in /var/log/postgresql/postgresql-9.4-main.log
Thanks.
I finally got this answer: @langton's answer to "What this error means in PostgreSQL?".
He said that
you should run pg_upgradecluster or similar, or just create a new cluster with pg_createcluster (these commands are for debian systems - you didn't specify your OS)
So I executed the command:
pg_createcluster -d /opt/data/postgresql/data -l /opt/data/postgresql/log 9.4 ckd
And then:
service postgresql restart
it started!
If downtime is allowed and you already have databases with data in the old cluster location, you only need to physically copy the data to the new location.
This is a more or less common operation if your partition runs out of space.
# Check that current data directory is the same that
# the one in the postgresql.conf config file
OLD_DATA_DIR=$(sudo -u postgres psql --no-psqlrc --no-align --tuples-only --quiet -c "SHOW data_directory;")
echo "${OLD_DATA_DIR}"
CONFIG_FILE=$(sudo -u postgres psql --no-psqlrc --no-align --tuples-only --quiet -c "SHOW config_file;")
echo "${CONFIG_FILE}"
# Stop PostgreSQL
systemctl stop postgresql
# Change the data directory in the config
# Better to do it with an editor, instead of sed
NEW_DATA_DIR='/opt/data/postgresql/data'
sed -i "s%data_directory = '${OLD_DATA_DIR}'%data_directory = '${NEW_DATA_DIR}'%" "${CONFIG_FILE}"
# Move/Copy the data for example using rsync
rsync -av --dry-run "${OLD_DATA_DIR}" "${NEW_DATA_DIR}"
# Take care with the classic rsync pitfalls around trailing slashes
rsync -av "${OLD_DATA_DIR}" "${NEW_DATA_DIR}"
# Rename the old dir to avoid misunderstandings, and
# check the permissions on the new one
# Start postgres
systemctl start postgresql
# Check that everything goes well and eventually drop the old data
# Make sure that the logs and everything else is where you want.
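After the restart, it is worth confirming that PostgreSQL really picked up the new location, reusing the query from the start of the script:
sudo -u postgres psql --no-psqlrc --no-align --tuples-only --quiet -c "SHOW data_directory;"
# expected output: /opt/data/postgresql/data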

Stopping supervisord: Shut down

I tried to start supervisord but I am getting an error. Can anyone help? Thanks.
/etc/init.d/supervisord file.
SUPERVISORD=/usr/local/bin/supervisord
SUPERVISORCTL=/usr/local/bin/supervisorctl
case $1 in
start)
echo -n "Starting supervisord: "
$SUPERVISORD
echo
;;
stop)
echo -n "Stopping supervisord: "
$SUPERVISORCTL shutdown
echo
;;
restart)
echo -n "Stopping supervisord: "
$SUPERVISORCTL shutdown
echo
echo -n "Starting supervisord: "
$SUPERVISORD
echo
;;
esac
Then run these
sudo chmod +x /etc/init.d/supervisord
sudo update-rc.d supervisord defaults
sudo /etc/init.d/supervisord start
And getting this:
Stopping supervisord: Shut down
Starting supervisord: /usr/local/lib/python2.7/dist-packages/supervisor/options.py:286: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
Error: Another program is already listening on a port that one of our HTTP servers is configured to use. Shut this program down first before starting supervisord.
For help, use /usr/local/bin/supervisord -h
Conf file (located at /etc/supervisord.conf):
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
[supervisord]
logfile=/tmp/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[program:myproject]
command=/home/richard/envs/myproject_stage/bin/python /home/richard/webapps/myproject/manage.py run_gunicorn -b 127.0.0.1:8002 --log-file=/tmp/myproject_stage_gunicorn.log
directory=/home/richard/webapps/myproject/
user=www-data
autostart=true
autorestart=true
stdout_logfile=/tmp/myproject_stage_supervisord.log
redirect_stderr=true
First of all, type this in your console or terminal:
ps -ef | grep supervisord
You will get some supervisord pids, just like these:
root 2641 12938 0 04:52 pts/1 00:00:00 grep --color=auto supervisord
root 29646 1 0 04:45 ? 00:00:00 /usr/bin/python /usr/local/bin/supervisord
If you get output like that, your pid is the second one. Then, if you want to shut down your supervisord, you can do this:
kill -s SIGTERM 29646
Hope it's helpful. Ref: http://supervisord.org/running.html#signals
sudo unlink /tmp/supervisor.sock
This .sock file is defined by the file value in the [unix_http_server] section of /etc/supervisord.conf (the default is /tmp/supervisor.sock).
$ ps aux | grep supervisor
alexamil 54253 0.0 0.0 2506960 6440 ?? Ss 10:09PM 0:00.26 /usr/bin/python /usr/local/bin/supervisord -c supervisord.conf
so we can use:
$ pkill -f supervisord # kill it
This is what works for me. Run the following in the terminal (for Linux machines).
To check if the process is running:
sudo systemctl status supervisor
To stop the process:
sudo systemctl stop supervisor
Try running these commands
sudo unlink /run/supervisor.sock
and
sudo /etc/init.d/supervisor start
As of version 3.0a11, you could do this one-liner, which hops on the back of the supervisorctl pid function:
sudo kill -s SIGTERM $(sudo supervisorctl pid)
There are many answers already available, but I shall present a cleaner way to shut down supervisord.
By default, supervisord creates a file named supervisord.pid in the directory where the supervisord.conf file exists. That file contains the pid of the supervisord daemon. Read the pid from the file and kill the supervisord process.
However, you can configure where the supervisord.pid file should be created; refer to http://supervisord.org/configuration.html to configure it.
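With the configuration from the question above (pidfile=/tmp/supervisord.pid), that boils down to:
# read the daemon's pid from its pidfile and ask it to shut down cleanly
kill -s SIGTERM "$(cat /tmp/supervisord.pid)"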