I have a script, script2:
# This is script2 that is called by script1.
CURRENT_TOMCAT_PROCESS=`ps -ef | grep java | grep $TOMCAT_USER | grep -v grep | awk '{print $2}'`
echo "---> $CURRENT_TOMCAT_PROCESS"
and I call script2 in script1:
ssh $user@$server 'bash -s' < script2
It works fine. But I'm having trouble making the backticks work in a here document:
ssh $user@$server 'bash -s' <<EOF
CURRENT_TOMCAT_PROCESS=`ps -ef | grep java | grep $TOMCAT_USER | grep -v grep | awk '{print \$2}'`
echo "---> $CURRENT_TOMCAT_PROCESS"
EOF
(If I don't assign the output to a variable and just print it, it works fine, but when I try to assign it to the CURRENT_TOMCAT_PROCESS variable using backticks, it doesn't work.)
How can I make this work?
Thanks,
===============================================================================
I could make it work the following way. There is a lot of escaping involved:
ssh $user@$server 'bash -s' <<EOF
CURRENT_TOMCAT_PROCESS="\`ps -ef | grep java | grep $TOMCAT_USER | grep -v grep | awk '{print \$2}'\`"
echo "---> \$CURRENT_TOMCAT_PROCESS"
EFO
I think it is reasonable to escape, because you want the '$' transferred to the remote side. You seem to have made a typo in your last result (EFO instead of EOF). I tried typing it here again:
TOMCAT_USER=foo
ssh $user@$server 'bash -s' <<EOF
CURRENT_TOMCAT_PROCESS="\`ps -ef | grep java | grep $TOMCAT_USER | grep -v grep | awk '{print \$2}'\`"
echo "---> \$CURRENT_TOMCAT_PROCESS"
EOF
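An alternative that avoids most of the escaping is to quote the here-document delimiter, which suppresses all local expansion. Note that a local variable such as $TOMCAT_USER then no longer expands either and would have to be passed to the remote shell some other way. A minimal local sketch, using plain bash -s to stand in for ssh:

```shell
#!/bin/sh
# With a quoted delimiter (<<'EOF'), nothing in the here-document is
# expanded locally: backticks and $-variables reach the inner shell
# untouched. bash -s stands in for "ssh user@server 'bash -s'" here.
out=$(bash -s <<'EOF'
CURRENT_PROCESS=`echo 12345`
echo "---> $CURRENT_PROCESS"
EOF
)
echo "$out"
```

With an unquoted delimiter the outer shell would have run the backticks and expanded $CURRENT_PROCESS itself; with the quoted delimiter both survive to the inner bash.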
Related
When I run this command:
bjobs -r -P xenon -W | awk '{print $7}' | grep -v JOB_NAME |
cut -f 1 -d ' ' | xargs
in a terminal, all running JOB_NAMEs are printed, but when I run it from a Perl script, only JOB_IDs are printed.
The Perl script code is below:
@dummy_jobs = qx/bjobs -r -P xenon -W | awk '{print $7}' | grep -v JOB_NAME | cut -f 1 -d ' ' | xargs/;
What needs to be changed in Perl?
qx/.../ literals are very much like double-quoted strings. Specifically, $7 is interpolated, so you end up passing ... | awk '{print }' | ....
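The same effect is easy to reproduce in the shell itself: inside double quotes, $7 expands (to an unset positional parameter, i.e. nothing), which is exactly what happens to the string handed to qx//. A small sketch:

```shell
#!/bin/sh
set --                             # ensure there are no positional parameters
interpolated="awk '{print $7}'"    # double quotes: $7 expands to nothing
echo "$interpolated"               # prints: awk '{print }'
```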
Replace
qx/...$7.../
with
qx/...\$7.../
Or if you prefer, you can use
my $shell_cmd = <<'EOS'; # These single-quotes means you get exactly what follows.
bjobs -r -P xenon -W | awk '{print $7}' | grep -v JOB_NAME | cut -f 1 -d ' ' | xargs
EOS
my @dummy_jobs = qx/$shell_cmd/;
Another difference is that qx uses /bin/sh instead of whatever shell you were using, but that shouldn't be relevant here.
Unexpectedly, this fails (no output; tried in sh, zsh, bash):
echo -e "foo\nplayed\nbar" > /tmp/t && tail -f /tmp/t | grep played | sed 's#pl#st#g'
Note that two greps in a row also fail, which shows that the specific commands are irrelevant:
# echo -e "foo\nplayed\nbar" > /tmp/t && tail -f /tmp/t | grep played | grep played
grep alone works:
# echo -e "foo\nplayed\nbar" > /tmp/t && tail -f /tmp/t | grep played
played
sed alone works:
# echo -e "foo\nplayed\nbar" > /tmp/t && tail -f /tmp/t | sed 's#pl#st#g'
foo
stayed
bar
With cat instead of tail, it works:
# echo -e "foo\nplayed\nbar" > /tmp/t && cat /tmp/t | grep played | sed 's#pl#st#g'
stayed
With journalctl --follow, it fails just like with tail.
What's the reason for being unable to pipe twice?
It's a buffering issue: the first grep buffers its output when it's piping to another command, but not when it's printing to a terminal. See http://mywiki.wooledge.org/BashFAQ/009 for additional info.
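The usual workaround is to force line buffering on the middle command, e.g. with GNU grep's --line-buffered flag (or stdbuf -oL from GNU coreutils for commands that lack such a flag). With the finite input below the difference isn't visible, but the same flag makes the tail -f pipeline print matches as they arrive:

```shell
#!/bin/sh
# --line-buffered makes grep flush each matching line immediately
# instead of waiting for a full block, so sed sees it right away:
printf 'foo\nplayed\nbar\n' | grep --line-buffered played | sed 's#pl#st#g'
# prints: stayed
```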
$ echo '' | sed -e '$a\new content' | cat -n
1
2 new content
but if we don't supply a newline, there's no output at all:
$ echo -n '' | sed -e '$a\new content' | cat -n
$
Other (even more important) questions are: can it be worked around? How?
According to Eric Pement, this is not possible with sed.
However awk can do this easily
$ printf '' | awk '{print} END {print "new content"}'
new content
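The reason awk succeeds is that its END block runs even when zero input records were read, whereas sed's commands (including $a) only execute once per input line. Checking both cases:

```shell
#!/bin/sh
# awk's END block executes even on completely empty input,
# so the appended line always appears:
printf ''    | awk '{print} END {print "new content"}'   # prints: new content
printf 'x\n' | awk '{print} END {print "new content"}'   # prints: x, new content
```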
I'm using Celery to manage delayed tasks in my Django project.
I ran into a problem when I tried to shut down Celery as suggested in the manual.
>> ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
kill: No such process
>> ps auxww | grep 'celery worker' | awk '{print $2}'
28630
>> ps auxww | grep 'celery worker' | awk '{print $2}'
28633
The PID changes continuously, which makes it hard to send the kill signal.
How can I solve this problem? Thanks in advance.
[ Update ]
django settings.py
import djcelery
djcelery.setup_loader()
BROKER_URL = 'amqp://guest:guest@localhost:5672/' # Using RabbitMQ
CELERYD_MAX_TASKS_PER_CHILD = 1
PID check (After reboot)
>> ps auxww | grep 'celery worker' | awk '{print $2}'
3243
>> manage.py celery worker --loglevel=info
celery#{some id value}.... ready
>> ps auxww | grep 'celery worker' | awk '{print $2}'
3285
3293
3296
>> ps auxww | grep 'celery worker' | awk '{print $2}' | xargs kill -9
kill: No such process
>> ps auxww | grep 'celery worker' | awk '{print $2}'
3321
>> ps auxww | grep 'celery worker' | awk '{print $2}'
3324
Question
At least one celery worker remains even after a reboot, and its PID changes continuously.
The Celery daemon runs two workers at once. How can I make it run only one worker?
It is doing precisely what you asked for with the CELERYD_MAX_TASKS_PER_CHILD setting:
Maximum number of tasks a pool worker process can execute before it’s replaced with a new one.
Apparently you wanted to run one worker process, but that is controlled by a different setting, namely CELERYD_CONCURRENCY.
So, replace
CELERYD_MAX_TASKS_PER_CHILD = 1
with
CELERYD_CONCURRENCY = 1
How do I execute the kill -9 in this Perl one-liner? I have gotten to the point where I have the PIDs listed and can print them out, like so:
ps -ef | grep -v grep | grep /back/mysql | perl -lane '{print "kill -9 $F[1]"}'
Have you considered pkill or pgrep? Since /back/mysql is a path rather than a process name, match against the full command line with -f:
pkill -f /back/mysql
or
pgrep -f /back/mysql | xargs kill -9
OK, heavily edited from my original answer.
First, the straightforward answer:
ps -ef | grep -v grep | grep /back/mysql | perl -lane 'kill 9, $F[1]'
Done.
But grep | grep | perl is kind of a silly way to do that. My initial reaction is "Why do you need Perl?" I would normally do it with awk | kill, saving Perl for more complicated problems that justify the extra typing:
ps -ef | awk '/\/back\/mysql/ {print $2}' | xargs kill -9
(Note that the awk command won't find itself, because its own command line contains the string "\/back\/mysql", which doesn't match the pattern /\/back\/mysql/.)
You can of course use Perl in place of awk:
ps -ef | perl -lane 'print $F[1] if /\/back\/mysql/' | xargs kill -9
(I deliberately used leaning toothpicks instead of a different delimiter so the process wouldn't find itself, as in the awk case.)
The question then switches from "Why do you need perl?" to "Why do you need grep/awk/kill?":
ps -ef | perl -lane 'kill 9, $F[1] if /\/back\/mysql/'
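As an aside, a grep-only way to avoid the self-match (the same idea as the leaning toothpicks above) is the bracket trick: the regex [/]back/mysql still matches the string /back/mysql, but grep's own command line no longer contains that literal string. A hypothetical illustration with canned input standing in for ps output:

```shell
#!/bin/sh
# The first line contains /back/mysql and matches; the second line
# (standing in for grep's own ps entry) contains only [/]back/mysql,
# which the pattern does not match:
printf 'perl /back/mysql\ngrep [/]back/mysql\n' | grep '[/]back/mysql'
# prints: perl /back/mysql
```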
Let's use a more appropriate ps command, for starters.
ps -e -o pid,cmd --no-headers |
perl -lane'kill(KILL => $F[0]) if $F[1] eq "/back/mysql";'
ps -ef | grep -v grep | grep /back/mysql | perl -lane 'kill(9, $F[1])'
The kill function is available in Perl.
You could omit the two grep commands too:
ps -ef | perl -lane 'kill(9, $F[1]) if m%/back/mysql\b%'
(untested)
Why aren't you using even more Perl?
ps -ef | perl -ane 'kill 9,$F[1] if m{/back/mysql}'