I have the following supervisord config (copied from this answer):
[program:myprogram]
process_name=MYPROGRAM%(process_num)s
directory=/var/www/apps/myapp
command=/var/www/apps/myapp/virtualenv/bin/python index.py --PORT=%(process_num)s
startsecs=2
user=youruser
stdout_logfile=/var/log/myapp/out-%(process_num)s.log
stderr_logfile=/var/log/myapp/err-%(process_num)s.log
numprocs=4
numprocs_start=14000
Can I do the same thing with systemd?
A systemd unit can include specifiers, which let you write a generic template unit that is then instantiated several times.
Example based on your supervisord config, /etc/systemd/system/mydaemon@.service:
[Unit]
Description=My awesome daemon on port %i
After=network.target
[Service]
User=youruser
WorkingDirectory=/var/www/apps/myapp
Type=simple
ExecStart=/var/www/apps/myapp/virtualenv/bin/python index.py --PORT=%i
[Install]
WantedBy=multi-user.target
You may then enable and start as many instances of that service as you like, for example:
# systemctl start mydaemon@4444.service
Article with more examples on Fedora Magazine.org: systemd: Template unit files.
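To mirror the supervisord settings numprocs=4 and numprocs_start=14000, you could start one instance per port with a small shell loop (a sketch; the instance names follow the template above):
# enable and start four instances on ports 14000-14003
for port in $(seq 14000 14003); do
    systemctl enable --now mydaemon@${port}.service
done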
I have a set of units that need to run with multiple targets that come after multi-user.target.
Example:
multi-user.target <--> example1.target <--> example2.target <--> multi-user.target
Example target:
[Unit]
Description=Example target
Wants=multi-user.target
Requires=example.service
#PropagatesStopTo=example.service
Conflicts=rescue.service rescue.target
After=multi-user.target basic.target rescue.service rescue.target
Example service unit:
[Unit]
Description=Example unit
After=multi-user.target
Wants=multi-user.target
[Service]
Environment=Some Environment
ExecStart=Some Binary
Restart=on-failure
RestartSec=1
Type=simple
[Install]
WantedBy=example1.target example2.target
The main problem is that when I try to stop the currently running target, none of the required units stop.
I've tried using PropagatesStopTo=example.service in the target with no success. Below is the output:
/lib/systemd/system/example1.target:7: Unknown key name 'PropagatesStopTo' in section 'Unit', ignoring.
My systemd version is:
systemd 241 (241-166-g511646b+)
I know my systemd doesn't support PropagatesStopTo, so I'm trying to find an alternative that works in my current systemd version.
You can add the line below to the service file.
PartOf=example1.target
This adds a ConsistsOf= dependency on the unit in the target.
From the systemd documentation:
Configures dependencies similar to Requires=, but limited to stopping and restarting of units. When systemd stops or restarts the units listed here, the action is propagated to this unit. Note that this is a one-way dependency — changes to this unit do not affect the listed units.
When PartOf=b.service is used on a.service, this dependency will show
as ConsistsOf=a.service in property listing of b.service. ConsistsOf=
dependency cannot be specified directly.
Read more in the systemd.unit(5) man page.
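Applied to the units from the question, the service file gains a single line (a sketch reusing the unit names above):
[Unit]
Description=Example unit
After=multi-user.target
Wants=multi-user.target
PartOf=example1.target
With this in place, systemctl stop example1.target is propagated to the service and stops it as well.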
Referring to https://docs.microfocus.com/itom/MP_for_Apache_Kafka:1.10/Kafka/Kafka_JMX, I created jmx_local.conf and modified the Kafka startup script.
The Kafka startup script picks up jmx_local.conf, but the port is not getting exposed.
This is what I see when grepping the java process:
"/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/bin/java -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.config.file=/usr/local/etc/kafka/jmx_local.conf kafka.Kafka /usr/local/etc/kafka/server.properties"
cat /usr/local/etc/kafka/jmx_local.conf
-Dcom.sun.management.jmxremote.port=9395
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
I also tried port 10167, but the port is not enabled either. I also modified the line to 'com.sun.management.jmxremote.port=9395'.
I can see the other JMX properties on the java process.
Any suggestions, please?
I did grep -rl "jmxremote" /usr/local/Cellar/kafka/2.6.0 and found that the JMX config is picked up from bin/kafka-run-class.sh. So I added '-Dcom.sun.management.jmxremote.port=9395' in bin/kafka-run-class.sh and restarted the Kafka service.
To check whether the port is now listening:
netstat -an | grep 9395
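As an alternative to editing the script: the stock bin/kafka-run-class.sh honours the JMX_PORT environment variable and appends -Dcom.sun.management.jmxremote.port=$JMX_PORT to KAFKA_JMX_OPTS by itself, so a sketch of the same result without modifying the file (assuming the Homebrew layout from the question) is:
# kafka-run-class.sh adds the port flag when JMX_PORT is set
JMX_PORT=9395 kafka-server-start /usr/local/etc/kafka/server.properties
netstat -an | grep 9395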
I have configured Superset in a virtualenv and want to run it as a service.
I have tried using the config below, but it's not working:
[Unit]
Description=superset service
After=network.target
[Service]
Type=simple
User=superset
Group=superset
Environment=PATH=/home/ubuntu/code/superset:$PATH
Environment=PYTHONPATH=/var/superset/superset:$PYTHONPATH
ExecStart=/home/ubuntu/code/superset/superset runserver
[Install]
WantedBy=multi-user.target
The virtualenv folder is Superset.
I get the error below:
/etc/init.d/superset: 1: /etc/init.d/superset: [Unit]: not found
Usage: service < option > | --status-all | [ service_name [ command | --full-restart ] ]
/etc/init.d/superset: 5: /etc/init.d/superset: [Service]: not found
Actually, superset runserver is meant for development mode; for production, other tools such as gunicorn are highly recommended.
Anyway, the main problem is the superset path: inside the virtualenv it is $VENV_PATH/bin/superset (applications that ship command-line entry points, such as superset or airflow, install them under $VENV_PATH/bin, and the easy way to find the path of any application on a Linux system is the which command, in this case which superset). Note also what the quoted error is telling you: the unit file ended up in /etc/init.d and was run as a SysV init script, whereas systemd unit files belong in /etc/systemd/system/.
This is the Superset service file that I use in production; hope it's useful:
[Unit]
Description=Apache Superset Webserver Daemon
After=network.target
[Service]
PIDFile=/home/superset/superset-webserver.PIDFile
User=superset
Group=superset
Environment=SUPERSET_HOME=/home/superset
Environment=PYTHONPATH=/home/superset
WorkingDirectory=/home/superset
ExecStart=/home/superset/venv/bin/python3.7 /home/superset/venv/bin/gunicorn --workers 8 --worker-class gevent --bind 0.0.0.0:8888 --pid /home/superset/superset-webserver.PIDFile superset:app
ExecStop=/bin/kill -s TERM $MAINPID
[Install]
WantedBy=multi-user.target
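To install it (a sketch; the file name superset.service is an assumption):
# copy the unit into place, then reload systemd and start it
sudo cp superset.service /etc/systemd/system/superset.service
sudo systemctl daemon-reload
sudo systemctl enable --now superset.service
sudo systemctl status superset.service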
I am using the Kubernetes load-balancer (here the HAProxy configuration is rewritten every 10 seconds and HAProxy is restarted). Since I want to pass the socket connection while reloading HAProxy, I changed the HAProxy Dockerfile so that it uses the HAProxy 1.8-dev2 version. The image used is haproxytech/haproxy-ubuntu:1.8-dev2. I also added the following line under the global section of the template.cfg file (the template from which the HAProxy configuration is written):
stats socket /var/run/haproxy/admin.sock mode 660 level admin expose-fd listeners
I also changed the reload command in the haproxy_reload file as follows:
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -x /var/run/haproxy/admin.sock -sf $(cat /var/run/haproxy.pid)
Once I run the Docker image (kubectl create -f rc.yaml --namespace load-balancer), I get the following error:
W1027 07:13:37.922565 5 service_loadbalancer.go:687] Requeuing kube-system/kube-dns because of error: error restarting haproxy -- [WARNING] 299/071337 (21) : We didn't get the expected number of sockets (expecting 1347703880 got 0)
[ALERT] 299/071337 (21) : Failed to get the sockets from the old process!
: exit status 1
FYI:
I commented out the stats socket line in the template.cfg file and ran the Docker image to verify whether the restart command identifies the socket. The same error occurred, so it seems the soft-restart command doesn't pick up the stats socket created by HAProxy.
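For reference, the seamless-reload handshake this setup relies on looks like the following (a sketch using the paths from the question; -x makes the new process fetch the listening sockets from the old one over the stats socket, which must exist and be writable before the first reload):
# initial start: HAProxy creates the stats socket declared in the global section
mkdir -p /var/run/haproxy
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
# reload: the new process fetches the listeners via -x, then soft-stops the old PIDs via -sf
haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -x /var/run/haproxy/admin.sock -sf $(cat /var/run/haproxy.pid)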
Here is my zkServer.cmd file:
@echo off
setlocal
call "%~dp0zkEnv.cmd"
set ZOOMAIN=org.apache.zookeeper.server.quorum.QuorumPeerMain
echo on
call %JAVA% "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.root.logger=%ZOO_LOG4J_PROP%" -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%" %*
endlocal
The zkServer.sh script runs the zkEnv.sh script, which in turn looks for the script ../conf/zookeeper-env.sh.
Create a file in the conf folder called zookeeper-env.sh.
Paste this into the file and restart ZooKeeper:
JMXLOCALONLY=false
JMXDISABLE=false
JMXPORT=4048
JMXAUTH=false
JMXSSL=false
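After the restart, you can verify that the JMX agent is listening on the JMXPORT value set above:
netstat -an | grep 4048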
First obtain the hostname (or a reachable IP, e.g. a LAN/public/NAT address):
hostname -i
# or find ip
ip a
Next, add the following options to ZOOMAIN (assuming hostname my.remoteconsole.org and desired port 8989):
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.port=8989
-Djava.rmi.server.hostname=my.remoteconsole.org
More details about the available options are in the Java docs (http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html).
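Applied to the zkServer.cmd from the question, the call line would then look something like this (a sketch; the extra -D options are inserted before -cp):
call %JAVA% "-Dzookeeper.log.dir=%ZOO_LOG_DIR%" "-Dzookeeper.root.logger=%ZOO_LOG4J_PROP%" ^
 -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false ^
 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=8989 ^
 -Djava.rmi.server.hostname=my.remoteconsole.org ^
 -cp "%CLASSPATH%" %ZOOMAIN% "%ZOOCFG%" %*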
Add org.apache.zookeeper.server.quorum.QuorumPeerMain to the server start command.
The class org.apache.zookeeper.server.quorum.QuorumPeerMain starts a JMX-manageable ZooKeeper server. This class registers the proper MBeans during initialization to support JMX monitoring and management of the instance.
In addition to the above answer by Marcell du Plessis, if you are running ZooKeeper as a systemd service, you can specify the JMX port in an environment variable:
[Unit]
Description=Apache Kafka ZooKeeper
Requires=network.target
After=network.target
[Service]
Type=simple
User=user
Group=users
ExecStart=/your-zookeeper-install-path/bin/zkServer.sh start
ExecStop=/your-zookeeper-install-path/bin/zkServer.sh stop
TimeoutStopSec=180
Restart=on-failure
Environment="JMX_PORT=9999"
[Install]
WantedBy=multi-user.target
Alias=zookeeper.service
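After installing the unit, reload systemd and enable it (a sketch; thanks to the Alias= line the service can also be addressed as zookeeper.service):
sudo systemctl daemon-reload
sudo systemctl enable --now zookeeper.service
# confirm the port from the Environment="JMX_PORT=9999" line is up
netstat -an | grep 9999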