I'm using Celery to manage delayed tasks in my Django project.
I ran into a problem when I tried to shut down Celery as suggested in the manual.
>> ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
kill: No such process
>> ps auxww | grep 'celery worker' | awk '{print $2}'
28630
>> ps auxww | grep 'celery worker' | awk '{print $2}'
28633
The PID changes continuously, which makes it hard to send the kill signal.
How can I solve this problem? Thanks in advance.
[ Update ]
django settings.py
import djcelery
djcelery.setup_loader()
BROKER_URL = 'amqp://guest:guest@localhost:5672/' # Using RabbitMQ
CELERYD_MAX_TASKS_PER_CHILD = 1
PID check (After reboot)
>> ps auxww | grep 'celery worker' | awk '{print $2}'
3243
>> manage.py celery worker --loglevel=info
celery@{some id value}.... ready
>> ps auxww | grep 'celery worker' | awk '{print $2}'
3285
3293
3296
>> ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
kill: No such process
>> ps auxww | grep 'celery worker' | awk '{print $2}'
3321
>> ps auxww | grep 'celery worker' | awk '{print $2}'
3324
Question
At least one celery worker remains even after a reboot, and its PID changes continuously.
The Celery daemon starts two workers at once. How can I make it run only one worker?
It is doing precisely what you asked for with the CELERYD_MAX_TASKS_PER_CHILD setting:
Maximum number of tasks a pool worker process can execute before it’s replaced with a new one.
Apparently you wanted to run one worker process, but that is controlled by a different setting, namely CELERYD_CONCURRENCY.
So, replace
CELERYD_MAX_TASKS_PER_CHILD = 1
with
CELERYD_CONCURRENCY = 1
Related
When I run this command:
bjobs -r -P xenon -W | awk '{print $7}' | grep -v JOB_NAME |
cut -f 1 -d ' ' | xargs
in a terminal, all the running JOB_NAMEs come back, but when I run it from a Perl script I only get JOB_IDs.
Perl script code is below:
my @dummy_jobs = qx/bjobs -r -P xenon -W | awk '{print $7}' | grep -v JOB_NAME | cut -f 1 -d ' ' | xargs/;
What needs to be changed in Perl?
qx/.../ literals are very much like double-quoted strings. Specifically, $7 is interpolated, so you end up passing ... | awk '{print }' | ....
Replace
qx/...$7.../
with
qx/...\$7.../
Or if you prefer, you can use
my $shell_cmd = <<'EOS'; # The single quotes mean you get exactly what follows.
bjobs -r -P xenon -W | awk '{print $7}' | grep -v JOB_NAME | cut -f 1 -d ' ' | xargs
EOS
my @dummy_jobs = qx/$shell_cmd/;
Another difference is that qx uses /bin/sh instead of whatever shell you were using, but that shouldn't be relevant here.
For example, I need to use an app called SomeApp, but it often needs to be restarted, so I type "ps -ef | grep SomeApp" and then "kill -9 7777",
which first finds the process ID and then stops that process:
XXXX:~ XXXX$ ps -ef | grep SomeApp
333 7777 1 0 1:40PM ?? 0:40.31 /Users/XXXX/SomeApp
333 8888 9999 0 1:58PM abcd000 0:00.00 grep SomeApp
XXXX:~ XXXX$ kill -9 7777
Now I want to put the commands into a .sh script, but there are a few things I don't know how to write in shell:
exclude the line produced by my own grep
pick out the correct line of the result
get the second field (the process ID) of the result string
Can anyone help?
This'll do it.
ps -ef | grep 'SomeApp' | grep -v grep | awk '{print $2}' | xargs kill
Or look at pgrep and pkill depending on the OS.
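A common variant drops the grep -v grep step by putting a character class in the pattern (a sketch, with SomeApp standing in for the real name):

```shell
# The regex [S]omeApp matches "SomeApp", but the grep process's own
# command line contains the literal "[S]omeApp", which the regex does
# not match -- so grep never finds itself in the ps output.
ps -ef | grep '[S]omeApp' | awk '{print $2}' | xargs kill
```

(GNU xargs also accepts -r to skip running kill when nothing matched.)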
Try as I might I cannot kill these celery workers.
I run:
celery --app=my_app._celery:app status
I see I have 3 (I don't understand why 3 workers = 2 nodes, please explain if you know)
celery@ip-x-x-x-x: OK
celery@ip-x-x-x-x: OK
celery@named-worker.%ip-x-x-x-x: OK
2 nodes online.
I run (as root):
ps auxww | grep 'celery@ip-x-x-x-x' | awk '{print $2}' | xargs kill -9
The workers just keep reappearing with a new PID.
Please help me kill them.
A process whose PID keeps changing is being respawned by a supervising parent. Even though the worker's PID keeps changing, its process group ID stays constant, so you can kill the group leader (whose PID equals the group ID) by sending it a signal:
ps axjf | grep '[c]elery' | awk '{print $3}' | xargs kill -9
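For reference, the general pattern for taking out a whole process group is to look up the PGID of one member and signal the group; the leading dash before the PGID is what makes kill target the entire group (a sketch; $PID is whichever worker PID you currently see):

```shell
# Resolve the process group ID of one worker, then kill the group.
PGID=$(ps -o pgid= -p "$PID" | tr -d ' ')
kill -9 -- -"$PGID"
```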
Alternatively, you can also kill with pkill
pkill -f celery
The -f flag matches the pattern against the full command line, so this kills every process whose command line contains celery.
Reference: killing a process
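Before reaching for -9, it can be worth previewing what the pattern matches and trying a graceful TERM first (a sketch; pgrep -a is a Linux option that shows full command lines):

```shell
pgrep -af celery        # preview: PIDs plus the matching command lines
pkill -TERM -f celery   # ask the workers to shut down cleanly
sleep 5
pkill -KILL -f celery   # force-kill any stragglers
```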
pkill -f celery
Run from the command line, this will kill all processes related to celery.
In your console, type:
ps aux | grep celery
I get:
simon 24615 3.8 0.6 344276 219604 pts/3 S+ 22:53 0:56 /usr/bin/python3 /home/simon/.local/bin/celery -A worker_us_task worker -l info -Q us_queue --concurrency=30 -n us_worker@%h
take what you find after -A and type:
pkill -9 -f 'worker_us_task worker'
I always use:
ps auxww | grep 'celery' | awk '{print $2}' | xargs kill -9
If you're using supervisord to run celery, you need to kill the supervisord process as well.
How do I execute the kill -9 in this Perl one-liner? I have gotten down to where I have the PIDs listed and can print them out, like so:
ps -ef | grep -v grep | grep /back/mysql | perl -lane '{print "kill -9 $F[1]"}'
Have you considered pkill or pgrep?
pkill -f /back/mysql
or
pgrep -f /back/mysql | xargs kill -9
OK, heavily edited from my original answer.
First, the straightforward answer:
ps -ef | grep -v grep | grep /back/mysql | perl -lane 'kill 9, $F[1]'
Done.
But grep | grep | perl is kind of a silly way to do that. My initial reaction is "Why do you need Perl?" I would normally do it with awk | kill, saving Perl for more complicated problems that justify the extra typing:
ps -ef | awk '/\/back\/mysql/ {print $2}' | xargs kill -9
(Note that the awk won't find itself because the string "\/back\/mysql" doesn't match the pattern /\/back\/mysql/)
You can of course use Perl in place of awk:
ps -ef | perl -lane 'print $F[1] if /\/back\/mysql/' | xargs kill -9
(I deliberately used leaning toothpicks instead of a different delimiter so the process wouldn't find itself, as in the awk case.)
The question then switches from "Why do you need perl?" to "Why do you need grep/awk/kill?":
ps -ef | perl -lane 'kill 9, $F[1] if /\/back\/mysql/'
Let's use a more appropriate ps command, for starters.
ps -e -o pid,cmd --no-headers |
perl -lane'kill(KILL => $F[0]) if $F[1] eq "/back/mysql";'
ps -ef | grep -v grep | grep /back/mysql | perl -lane 'kill(9, $F[1])'
The kill function is available in Perl.
You could omit the two grep commands too:
ps -ef | perl -lane 'kill(9, $F[1]) if m%/back/mysql\b%'
(untested)
Why aren't you using even more Perl?
ps -ef | perl -ane 'kill 9,$F[1] if m{/back/mysql}'
I have a script2:
# This is script2 that is called by script1.
CURRENT_TOMCAT_PROCESS=`ps -ef | grep java | grep $TOMCAT_USER | grep -v grep | awk '{print $2}'`
echo "---> $CURRENT_TOMCAT_PROCESS"
and I call script2 in script1:
ssh $user@$server 'bash -s' < script2
It works fine, but I'm having trouble making the backticks work inside a here-document:
ssh $user@$server 'bash -s' <<EOF
CURRENT_TOMCAT_PROCESS=`ps -ef | grep java | grep $TOMCAT_USER | grep -v grep | awk '{print \$2}'`
echo "---> $CURRENT_TOMCAT_PROCESS"
EOF
(If I don't assign the output to a variable and just print it, it works fine, but when I try to assign it to the CURRENT_TOMCAT_PROCESS variable using backticks, it doesn't work.)
How can I make this work?
Thanks,
===============================================================================
I could make it work the following way. There is a lot of escaping involved:
ssh $user@$server 'bash -s' <<EOF
CURRENT_TOMCAT_PROCESS="\`ps -ef | grep java | grep $TOMCAT_USER | grep -v grep | awk '{print \$2}'\`"
echo "---> \$CURRENT_TOMCAT_PROCESS"
EFO
I think it is reasonable to escape, because you want to transfer the '$' to the remote side. You seem to have made a typo in your last snippet (EFO instead of EOF). I've typed it out here again:
TOMCAT_USER=foo
ssh $user@$server 'bash -s' <<EOF
CURRENT_TOMCAT_PROCESS="\`ps -ef | grep java | grep $TOMCAT_USER | grep -v grep | awk '{print \$2}'\`"
echo "---> \$CURRENT_TOMCAT_PROCESS"
EOF
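An alternative worth knowing: quoting the here-document delimiter (<<'EOF') turns off all local expansion, so the backticks and $ signs need no escaping at all; the trade-off is that $TOMCAT_USER would then have to be set on the remote side instead of substituted locally. The effect is easy to see without ssh (a sketch using a local bash -s in place of the remote shell):

```shell
X=outer
# Unquoted delimiter: the parent shell expands $X before bash -s runs.
bash -s <<EOF
echo "unquoted sees: $X"
EOF
# Quoted delimiter: the text is passed through verbatim, and the child
# shell does the expansion itself.
bash -s <<'EOF'
X=inner
echo "quoted sees: $X"
EOF
```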