I'm new to Storm. I submitted a topology, but now I want to shut Storm down entirely.
First I killed the topology with bin/storm kill topology_name.
How do I now stop ZooKeeper, Nimbus, and the supervisor?
If you ran it in a terminal without supervisord, just close the terminal.
If you ran it under supervisord:
to stop ZooKeeper, run "service zookeeper stop"
to stop supervisord, run "service supervisord stop"
When supervisord shuts down, your Storm daemons will shut down with it.
You can just kill the processes.
You can find the process IDs with the ps command:
ps -ef
Look for java processes with backtype.storm.daemon.nimbus (Nimbus), backtype.storm.ui.core (the UI), or backtype.storm.daemon.supervisor (a supervisor) in their command-line arguments.
For Zookeeper, you can just call bin/zkServer.sh stop.
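The find-and-kill approach above can be sketched as a small shell helper. The class-name patterns below match older Storm releases (backtype.storm.*; newer releases use org.apache.storm.*), so adjust them to your version:

```shell
# Print the PIDs (column 2 of `ps -ef` output) of Storm daemon
# processes read from stdin.
storm_daemon_pids() {
  grep -E 'backtype\.storm\.(daemon\.nimbus|daemon\.supervisor|ui\.core)' |
    awk '{print $2}'
}

# In practice, pipe a real process listing through it and kill the results:
#   ps -ef | storm_daemon_pids | xargs -r kill

# Demo on a canned listing:
sample='storm  1234     1  0 10:00 ?  00:00:01 java -cp ... backtype.storm.daemon.nimbus
storm  5678     1  0 10:00 ?  00:00:02 java -cp ... backtype.storm.daemon.supervisor
storm  9012     1  0 10:00 ?  00:00:00 bash unrelated_process'
printf '%s\n' "$sample" | storm_daemon_pids
```

Prefer a plain kill (SIGTERM) first and stop ZooKeeper with its own script as described above.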
Related
I'm aware that the Kafka server can be shut down using the shell script kafka-server-stop.sh, and ZooKeeper using zookeeper-server-stop.sh.
But how do we stop connect-distributed.sh gracefully? I didn't find any stop shell script for connect-distributed.
Unfortunately, there isn't a stop script.
Your best options, other than the kill command, are to manage the service with systemctl, or to run the server from a pre-built Docker image that can be stopped cleanly.
Every time I stop the Kafka server and start it again, it doesn't start properly, and I have to restart my whole machine before the Kafka server will come up.
Does anybody know how I can restart the Kafka server without having to restart my machine?
Actually, I would also like to terminate the consumer from the last session.
Thank you,
Zeinab
If your Kafka broker is running as a service (found under /lib/systemd/system/) from a recent Confluent Platform release, you can stop it using:
systemctl stop confluent-kafka.service
or if you'd like to restart the service,
systemctl restart confluent-kafka.service
Otherwise, you can stop your broker using
./bin/kafka-server-stop.sh
and re-start it:
./bin/kafka-server-start.sh config/server.properties
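A common reason the broker "doesn't start properly" after a stop is restarting it before the old process has fully exited, while it still holds its port and log-directory lock. A small sketch of a safer restart, assuming the standard scripts and that pgrep can find the broker by its kafka.Kafka main class:

```shell
# Wait up to $2 seconds (default 30) for process $1 to exit;
# succeed only once it is gone.
wait_for_exit() {
  pid=$1
  timeout=${2:-30}
  while kill -0 "$pid" 2>/dev/null && [ "$timeout" -gt 0 ]; do
    sleep 1
    timeout=$((timeout - 1))
  done
  ! kill -0 "$pid" 2>/dev/null
}

# Usage sketch (paths are assumptions):
#   broker_pid=$(pgrep -f kafka.Kafka)
#   ./bin/kafka-server-stop.sh
#   wait_for_exit "$broker_pid" 60 &&
#     ./bin/kafka-server-start.sh -daemon config/server.properties
```

The -daemon flag makes kafka-server-start.sh detach instead of tying up the terminal.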
If you want to stop a specific consumer, simply find the corresponding process id:
ps -ef | grep consumer_name
and kill that process:
kill process_id
Use kill -9 only if the process ignores the default TERM signal; a hard kill skips the consumer's clean shutdown, so its group has to wait for the session timeout before rebalancing.
Or simply:
sudo systemctl restart kafka
I'm using supervisord to manage multiple processes in a docker container.
However, one process is always the 'master', and the others are monitoring and reporting processes.
What I want to do is kill supervisord if the master process fails to start after startretries.
What I tried to do is use eventlistener to kill the process:
[eventlistener:master]
events=PROCESS_STATE_FAIL
command=supervisorctl stop all
But I don't think the events subsystem is that sophisticated; I suspect I need to actually write an event listener to handle the events.
Is that correct? Is there a simpler way to kill the entire supervisord instance if one of its processes fails?
Thanks
Another try:
[eventlistener:quit_on_failure]
events=PROCESS_STATE_FATAL
command=sh -c 'echo "READY"; while read -r line; do echo "$line"; supervisorctl shutdown; done'
Especially for Docker containers, it would literally be a killer feature to have a simple, straightforward shutdown on errors. A container should go down when its processes die.
Answered by: a supervisord event listener.
The command parameter MUST be an event handler; it can't be an arbitrary command.
I am running a VPS on Digital Ocean with Ubuntu 14.04.
I set up supervisor to run a bash script that exports environment vars and then starts celery:
#!/bin/bash
DJANGODIR=/webapps/myproj/myproj
# Activate the virtual environment
cd $DJANGODIR
source ../bin/activate
export REDIS_URL="redis://localhost:6379"
...
celery -A connectshare worker --loglevel=info --concurrency=1
Now I've noticed that supervisor does not seem to be killing these processes when I do supervisorctl stop. Furthermore, when I try to manually kill the processes they won't stop. How can I set up a better script for supervisor and how can I kill the processes that are running?
You should configure the stopasgroup=true option in your supervisord.conf file.
Otherwise you kill only the parent process, not its child processes.
Sending kill -9 should kill a process. If supervisorctl stop doesn't stop your process, try setting stopsignal to a different value, for example QUIT or KILL.
You can see more in supervisord documentation.
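Putting that together for the script in the question, the program section might look like the sketch below. The script name and stopwaitsecs value are assumptions; the group options are the fix described above. It also helps to make the last line of the wrapper script "exec celery -A connectshare worker --loglevel=info --concurrency=1", so celery replaces the shell and receives supervisord's stop signal directly.

```ini
[program:celery]
command=/webapps/myproj/run_celery.sh   ; the wrapper script (name is an assumption)
stopasgroup=true                        ; send the stop signal to the whole process group
killasgroup=true                        ; SIGKILL the whole group if it won't stop
stopsignal=TERM
stopwaitsecs=30                         ; how long to wait before escalating to SIGKILL
```

Run supervisorctl reread and supervisorctl update after editing the config so the new options take effect.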
I can't stop my celery worker using Supervisord, in the config file, it looks like this:
command=/usr/local/myapp/src/manage.py celery worker --concurrency=1 --loglevel=INFO
and when I try to stop it using the following command:
sudo service supervisord stop
It reports that the worker has stopped, while it actually has not.
One more problem: when you restart a program outside supervisord's scope, supervisord totally loses control over that program, because of the parent-child relationship between supervisord and its child processes.
My question is: how to run celery workers using Monit?