I'm trying to set up periodic tasks using Celery in my Django project. I'm really struggling to understand how Celery works, and now that it has started showing something, I don't know how to stop the workers.
At first, I run this command to start Celery beat:
celery -A proj beat
and then run this command to start a worker:
celery -A proj worker -B
No matter what I do, the previous workers are still working. Even though I updated the code and stopped the worker with Ctrl+C, they keep running and printing output like this. How can I stop all of them?
[2018-07-25 15:53:49,694: WARNING/ForkPoolWorker-2] Yo
[2018-07-25 15:53:50,224: WARNING/ForkPoolWorker-3] hello
[2018-07-25 15:53:52,694: WARNING/ForkPoolWorker-2] Yo
[2018-07-25 15:53:55,694: WARNING/ForkPoolWorker-3] Yo
[2018-07-25 15:53:58,694: WARNING/ForkPoolWorker-2] Yo
[2018-07-25 15:54:00,227: WARNING/ForkPoolWorker-3] world
[2018-07-25 15:54:00,229: WARNING/ForkPoolWorker-2] hello
Shutdown should be accomplished using the TERM signal.
Method 1:
$ pkill -9 -f 'celery worker'
Method 2:
$ ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
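Note that -9 sends SIGKILL, which terminates the workers immediately. Since pkill and kill send SIGTERM by default, dropping the -9 delivers the TERM signal mentioned above for a warm shutdown:
$ pkill -f 'celery worker'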
Official Document: here
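Another option, assuming the workers can still reach the broker, is Celery's built-in remote-control command, which asks every worker to shut down gracefully:
$ celery -A proj control shutdown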
I executed the commands below:
kubectl proxy --port=8081 &
kubectl proxy --port=8082 &
and of course I have 2 accessible endpoints:
curl http://localhost:8081/api/
curl http://localhost:8082/api/
But at the same time there are two running processes serving the same content.
How do I stop one of these processes in a "kubectl" manner?
Of course, I can kill the process, but that seems to be a less elegant way...
I believe the "kubectl way" is to not background the proxy at all, as it is intended to be a short-running process for accessing the API on your local machine without further authentication.
There is no way to stop it other than kill or ^C (if not in background).
You can use standard shell job-control tricks, though: running fg and then pressing ^C will work, as will kill %1.
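For example, a minimal sketch of the job-control approach:
$ kubectl proxy --port=8081 &   # job %1
$ kubectl proxy --port=8082 &   # job %2
$ jobs                          # list background jobs and their numbers
$ kill %1                       # stop the first proxy
$ fg %2                         # foreground the second, then press ^C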
Run this command to figure out the process id (pid):
netstat -tulp | grep kubectl
Then run sudo kill -9 <pid> to kill the process.
ps -ef | grep "kubectl proxy"
will show you the PID of the process
Then you can stop it with
kill -9 <pid>
Depending on the platform, you could wrap the proxy in a service/daemon, but that seems like overkill. I would just add aliases or functions to start and kill it, and source them in your terminal/shell profile to make it easier:
https://developer.apple.com/library/archive/documentation/MacOSX/Conceptual/BPSystemStartup/Chapters/CreatingLaunchdJobs.html
or
kubectl-proxy-start() {
  kubectl proxy &
}
kubectl-proxy-kill() {
  pkill -9 -f "kubectl proxy"
}
The following works for me on macOS:
pkill -9 -f "kubectl proxy"
Filter (grep) all "kube" PIDs and kill them in a loop:
for pid in $(netstat -tulp | grep kube | awk '{print $7}' | awk -F"/" '{print $1}' | uniq); do
  kill -9 $pid
done
Try this, using your port numbers of course:
$ pkill -f 'kubectl proxy --port=8080'
On Windows, to stop a background job in PowerShell:
Get-Job
Stop-Job Job1
I have an upstart init script, but my dev/testing/production machines have different numbers of CPUs/cores. I'd like to compute the number of worker processes as 4 * the number of cores within the init script.
The upstart docs say that the script stanzas use /bin/sh syntax, so I created a /bin/sh script to see what was going on. I'm getting drastically different results than from my upstart script.
The script stanza from my upstart script:
script
    # get the number of cores
    CORES=`lscpu | grep -v '#' | wc -l`
    # set the number of worker processes to 4 * num cores
    WORKERS=$(($CORES * 4))
    echo exec gunicorn -b localhost:8000 --workers $WORKERS tutalk_site.wsgi > tmp/gunicorn.txt
end script
which outputs:
exec gunicorn -b localhost:8000 --workers 76 tutalk_site.wsgi
My equivalent /bin/sh script:
#!/bin/sh
CORES=`lscpu -p | grep -v '#' | wc -l`
WORKERS=$(($CORES * 4))
echo exec gunicorn -b localhost:8000 --workers $WORKERS tutalk_site.wsgi
which outputs:
exec gunicorn -b localhost:8000 --workers 8 tutalk_site.wsgi
I'm hoping this is a rather simple problem and a few other pairs of eyes will locate the issue.
Any help would be appreciated.
I suppose I should have answered this several days ago. I first attempted using environment variables instead, but didn't have any luck.
I solved the issue by replacing the computation with a Python one-liner:
WORKERS=$(python -c "import os; print os.sysconf('SC_NPROCESSORS_ONLN') * 2")
and that worked out just fine.
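For reference, a shell-only equivalent is possible on systems that provide coreutils' nproc or POSIX getconf (a sketch; this is not what the script above uses):
# either line yields the online-CPU count without parsing lscpu
WORKERS=$(( $(nproc) * 4 ))
WORKERS=$(( $(getconf _NPROCESSORS_ONLN) * 4 ))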
I'm still curious why my Bourne-shell script came up with the correct value while the upstart script, whose docs say it uses Bourne-shell syntax, didn't. (Comparing the two snippets above: the upstart stanza calls plain lscpu, which prints one summary line per attribute, while the shell script calls lscpu -p, which prints one line per core; that difference alone would explain 76 vs. 8.)
Hi great people of Stack Overflow,
We're hosting a Docker container on EB with Node.js-based code running in it.
When redeploying our Docker container, we'd like the old one to do a graceful shutdown.
I've found help and guides on how our code can receive a SIGTERM signal produced by the 'docker stop' command.
However, further investigation into the EB machine running Docker, at:
/opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh
shows that when "flipping" from the current to the newly staged container, the old one is killed with 'docker kill'.
Is there any way to change this behaviour to docker stop?
Or in general a recommended approach to handling graceful shutdown of the old container?
Thanks!
Self-answering, as I've found a solution that works for us:
tl;dr: use .ebextensions scripts to run your script before 01flip; your script will make sure a graceful shutdown of whatever is inside the Docker container takes place.
First,
your app (or whatever you're running in Docker) has to be able to catch a signal, SIGINT for example, and shut down gracefully upon receiving it.
This is totally unrelated to Docker; you can test it anywhere (locally, for example).
There is a lot of info on the net about getting this kind of behaviour for different kinds of apps (be it Ruby, Node.js, etc.); a generic sketch follows.
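As a minimal illustration of the mechanism (plain shell rather than any particular app framework; the message text is arbitrary):
#!/bin/sh
# trap SIGINT, do any cleanup work, then exit successfully
trap 'echo "caught SIGINT, shutting down gracefully"; exit 0' INT
while true; do sleep 1; done
Run it, press ^C (or send kill -INT <pid>), and it exits with status 0 instead of dying mid-work.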
Second,
your EB/Docker-based project can have a .ebextensions folder that holds all kinds of scripts to execute while deploying.
We put two custom scripts into it, gracefulshutdown_01.config and gracefulshutdown_02.config, which look something like this:
# gracefulshutdown_01.config
commands:
  backup-original-flip-hook:
    command: cp -f /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh /opt/elasticbeanstalk/hooks/appdeploy/01flip.sh.bak
    test: '[ ! -f /opt/elasticbeanstalk/hooks/appdeploy/01flip.sh.bak ]'
  cleanup-custom-hooks:
    command: rm -f 05gracefulshutdown.sh
    cwd: /opt/elasticbeanstalk/hooks/appdeploy/enact
    ignoreErrors: true
and:
# gracefulshutdown_02.config
commands:
  reorder-original-flip-hook:
    command: mv /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh /opt/elasticbeanstalk/hooks/appdeploy/enact/10flip.sh
    test: '[ -f /opt/elasticbeanstalk/hooks/appdeploy/enact/01flip.sh ]'
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/enact/05gracefulshutdown.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      # find the currently running docker container
      EB_CONFIG_DOCKER_CURRENT_APP_FILE=$(/opt/elasticbeanstalk/bin/get-config container -k app_deploy_file)
      EB_CONFIG_DOCKER_CURRENT_APP=""
      if [ -f $EB_CONFIG_DOCKER_CURRENT_APP_FILE ]; then
        EB_CONFIG_DOCKER_CURRENT_APP=`cat $EB_CONFIG_DOCKER_CURRENT_APP_FILE | cut -c 1-12`
        echo "Graceful shutdown on app container: $EB_CONFIG_DOCKER_CURRENT_APP"
      else
        echo "NO CURRENT APP TO GRACEFUL SHUTDOWN FOUND"
        exit 0
      fi
      # give a graceful kill command to all running worker .js processes (not stats!!)
      docker exec $EB_CONFIG_DOCKER_CURRENT_APP sh -c "ps x -o pid,command | grep -E 'workers' | grep -v -E 'forever|grep'" | awk '{print $1}' | xargs docker exec $EB_CONFIG_DOCKER_CURRENT_APP kill -s SIGINT
      echo "sent kill signals"
      # wait (max 5 mins) until the processes are done and terminate themselves
      TRIES=100
      until [ $TRIES -eq 0 ]; do
        PIDS=`docker exec $EB_CONFIG_DOCKER_CURRENT_APP sh -c "ps x -o pid,command | grep -E 'workers' | grep -v -E 'forever|grep'" | awk '{print $1}' | cat`
        echo TRIES $TRIES PIDS $PIDS
        if [ -z "$PIDS" ]; then
          echo "finished graceful shutdown of docker $EB_CONFIG_DOCKER_CURRENT_APP"
          exit 0
        else
          # decrement using POSIX sh arithmetic (the shebang is /bin/sh, where bash's 'let' is unavailable)
          TRIES=$((TRIES - 1))
          sleep 3
        fi
      done
      echo "failed to shut down gracefully, please investigate manually"
      exit 1
gracefulshutdown_01.config is a small util that backs up the original 01flip.sh and deletes our custom script (if it exists).
gracefulshutdown_02.config is where the magic happens.
It creates the 05gracefulshutdown enact script and makes sure the flip will happen afterwards by renaming the original to 10flip.
05gracefulshutdown, the custom script, basically does this:
find the currently running Docker container
find all processes that need to be sent a SIGINT (for us, processes with 'workers' in their name)
send a SIGINT to those processes
loop:
check whether the processes from before have exited
continue looping for a limited number of tries
if the tries run out, exit with status "1" and don't continue to 10flip; manual intervention is needed.
This assumes you only have one Docker container running on the machine, and that you are able to hop on manually to check what's wrong in case it fails (for us, that has never happened yet).
I imagine it can also be improved in many ways, so have fun.
I am on Linux CentOS. I understand that "rpm -ql <package>" lists all the install paths for the corresponding package. However, I need just the base package install location. Is there any way/command/option in Linux to retrieve it? My code snippet to retrieve the list of running services and the corresponding installed packages is below:
for i in $(service --status-all | grep -v "not running" | grep -E 'running|stopped' | awk '{print $1}'); do
  packagename=$(rpm -qf /etc/init.d/$i)
  servicestatus=$(service --status-all | grep $i | awk '{print $NF}' | sed 's/...//g' | sed 's/.//g')
  # $tdydate is set earlier; append (>>) so each service adds a row instead of overwriting the file
  echo $tdydate, $(ip route get 8.8.8.8 | awk 'NR==1 {print $NF}'), $i, $packagename, $servicestatus >> "$HOME/MyLog/running_services.csv"
done
Now, I also need to get the corresponding package install location hosting each running service. Is there a way to retrieve this as well, along with the package names?
Thanks in advance for your help.
Regards.
Okay, with your answer to my question in the comments, which is much clearer to me than your initial question...
Hi, basically what I need is: I get a list of all installed services on my CentOS using service --status-all. Now, for each service, I need to know the corresponding application package location on Linux.
...I'll propose this (tested here on CentOS 6.6):
#!/bin/bash
for i in $(chkconfig --list | awk '{print $1}'); do
  service $i status >/dev/null 2>&1
  if [ $? -eq 0 ]; then
    rpm -qf /etc/init.d/$i
  fi
done | sort | uniq
That spits out all rpm names of the services which are currently running.
A bit more detail as to why your current approach is not going to work:
service --status-all is not going to return information which can be parsed reliably. For example, the output on a VM here:
acpid (pid 872) is running...
auditd (pid 789) is running...
Stopped
cgred is stopped
Checking for service cloud-init:Checking for service cloud-init:Checking for service cloud-init:Checking for service cloud-init:crond (pid 1088) is running...
ip6tables: Firewall is not running.
iptables: Firewall is not running.
Kdump is not operational
mdmonitor is stopped
netconsole module not loaded
Configured devices:
lo eth0
Currently active devices:
lo eth0
ntpd (pid 997) is running...
master (pid 1076) is running...
rdisc is stopped
restorecond is stopped
rsyslogd (pid 809) is running...
sandbox is stopped
saslauthd is stopped
openssh-daemon (pid 988) is running...
Some services don't even return their name (third line). Some say stopped, others not running. If you parse the first column of chkconfig --list you know all the service names, which correspond to files in /etc/init.d. Then you can query their status individually and read the return code ($?), which is 0 for running services (or generally for success in the Unix/Linux world), 1 or higher for not running or not installed or incomplete/malfunctioning services.
Armed with names in /etc/init.d/ you can then query the owning package with rpm -qf /etc/init.d/<servicename> and get exactly what I think you were looking for.
Edit: added | sort | uniq after the loop, because some packages contain multiple services, like for example cloud-init, which creates four different services on CentOS. So you sort the list, then make sure you only get distinct (uniq) names back.
Works for me:
acpid-1.0.10-2.1.el6.x86_64
audit-2.3.7-5.el6.x86_64
cloud-init-0.7.5-10.el6.centos.2.x86_64
cronie-1.4.4-12.el6.x86_64
cyrus-sasl-2.1.23-15.el6_6.1.x86_64
initscripts-9.03.46-1.el6.centos.1.x86_64
iptables-1.4.7-14.el6.x86_64
iptables-ipv6-1.4.7-14.el6.x86_64
iputils-20071127-17.el6_4.2.x86_64
kexec-tools-2.0.0-280.el6.x86_64
libcgroup-0.40.rc1-15.el6_6.x86_64
mdadm-3.3-6.el6.x86_64
ntp-4.2.6p5-1.el6.centos.x86_64
ntpdate-4.2.6p5-1.el6.centos.x86_64
openssh-server-5.3p1-104.el6_6.1.x86_64
policycoreutils-2.0.83-19.47.el6_6.1.x86_64
postfix-2.6.6-6.el6_5.x86_64
rsyslog-5.8.10-9.el6_6.x86_64
udev-147-2.57.el6.x86_64
You are looking for --whatprovides rather than -qf. (Note: single-dash -qf is short for --query --file; it is the double-dash --qf, an alias for --queryformat, that does formatting.)
Tweaking your example...
for i in $(chkconfig --list | awk '{print $1}'); do service $i status >/dev/null 2>&1; if [ $? -eq 0 ]; then echo -n "$i: "; rpm -q --whatprovides /etc/init.d/$i; fi; done | sort
FYI - this doesn't work on more modern systemd-based systems (CentOS 7).
Example on my Fedora 21 box:
Note: This output shows SysV services only and does not include native
systemd services. SysV configuration data might be overridden by native
systemd configuration.
If you want to list systemd services use 'systemctl list-unit-files'.
To see services enabled on particular target use
'systemctl list-dependencies [target]'.
netconsole: initscripts-9.56.1-5.fc21.x86_64
network: initscripts-9.56.1-5.fc21.x86_64
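For reference, a rough equivalent on systemd-based systems could map each running service unit to the package owning its unit file (a sketch, assuming an rpm-based distro; systemctl show -p FragmentPath reports the unit file's path):
#!/bin/bash
# map each running service unit to the rpm that owns its unit file
for unit in $(systemctl list-units --type=service --state=running --no-legend | awk '{print $1}'); do
  path=$(systemctl show -p FragmentPath "$unit" | cut -d= -f2)
  [ -n "$path" ] && { echo -n "$unit: "; rpm -qf "$path"; }
done | sort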
I started a Dancer/Starman server using:
sudo plackup -s Starman -p 5001 -E deployment --workers=10 -a mywebapp/bin/app.pl
but I'm unsure how I can stop the server. Can someone provide me with a quick way of stopping it and all the workers it has spawned?
Use the
--pid /path/to/the/pid.file
option, and you can kill the process based on its PID.
So, using the above option, you can use
kill $(cat /path/to/the/pid.file)
The pid.file simply stores the master's PID; you don't need to analyze the ps output...
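Putting it together (the pid-file path is just an example):
sudo plackup -s Starman -p 5001 -E deployment --workers=10 --pid /var/run/mywebapp.pid -a mywebapp/bin/app.pl
kill $(cat /var/run/mywebapp.pid)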
pkill -f starman
Kill processes based on name.
On Windows you can press Ctrl+C, the same keystroke as copy, but in a console it means Cancel and interrupts the process. Tested and working.