How to get a Linux process ID alone using Perl

When I execute the command below at the command prompt, it works fine. But when I include the same command in a Perl script, it prints the whole process line.
ps -ef | grep truecontrol | awk '{print$2}'
returns
4567
3456
When I execute it through Perl, it shows the whole process details. I want to assign the result to an array variable and work on it. How do I do that?
my $process_chk_command = `ps -ef | grep truecontrol | awk '{print$2}'`;
print($process_chk_command);
root 9902 9890 0 05:50 ? 00:00:03 /opt/abc/jre/bin/java -DTCFTP=1 -d64 -Xms16m -Xmx64m -Djava.library.path=/opt/abc/server/ext/wrapper/lib -cla

Perl's backticks and qx// interpolate variables, so when you write:
my $process_chk_command = `ps -ef | grep truecontrol | awk '{print $2}'`;
Perl interpolates the special variable $2. In your case, $2 is not set, and thus expands to the empty string, so the awk command is simply {print}.
You could escape the dollar sign (`ps ... | awk '{print \$2}'`) to avoid this.
(As an aside, I'd recommend grep [t]ruecontrol to prevent grep from matching its own process table entry, or that of its parent shell which constructs the pipeline. sh aficionados with a POSIX bent might additionally suggest `ps -eo pid,comm,args | awk '/[t]ruecontrol/{print \$1}'`.)

Try using pgrep
my $process_chk_command = `pgrep truecontrol`;

my $process_chk_command = `ps -ef | grep truecontrol`;
my (undef,$pid) = split(' ', $process_chk_command, -1);
PS: there's a Perl utility that converts awk scripts to Perl: a2p

True to Perl's motto, another way:
my $pcc=`killall -s truecontrol`;
my (undef,undef,$pid)=split(' ',$pcc);
print $pid;

Related

using here document and pipeline of sh at the same time

I'm using an sh here-document to run some commands, and I want to parse the output of those commands using awk. However, every time I execute it, the output of the command is appended with something like "% No such child process".
This is what my script looks like.
#!/bin/sh
com = "sudo -u username /path/of/file -l"
$com <<EOF | awk '{print $0}'
Commands.
.
.
.
EOF
How can I use a heredoc and a pipeline together without that unwanted string being appended?
Thanks
Your variable assignment is wrong in a couple of ways. First, you aren't actually assigning a variable; you're trying to run a command named com whose arguments are = and a string "sudo ...". Spaces must not be used on either side of the =:
com="sudo ..."
Second, command lines should not be stored in a variable; the shell's parser can only make that work the way you intend for very simple commands. Type the command out in full, or use a shell function.
com () {
sudo -u username /path/to/file -l
}
com <<EOF | awk '{print $0}'
...
EOF
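Here is a self-contained version of the same pattern; cat stands in for the sudo command, purely for illustration:

```shell
# A shell function fed by a heredoc, piped into awk.
com () {
  cat    # placeholder for: sudo -u username /path/of/file -l
}
com <<EOF | awk '{print $1}'
alpha beta
one two
EOF
```

This prints the first field of each heredoc line (alpha, then one).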
There's no problem, check:
$ cat <<EOF | awk '{print $1}'
a b c
1 2 3
EOF
a
1

Grep lines matching a pattern using Perl

I want to grep a pattern from a large file, but using grep it is very slow, and the pattern to grep is also case-insensitive. I read that Perl is faster for file reading. Can someone please tell me a way to do it in Perl?
Thanks.
cat File1.txt File2.txt | grep -in "exception" | grep -v "pattern"
In perl:
perl -ne 'print "$.:$_" if /exception/i and !/pattern/' File1.txt File2.txt
or without cat:
grep -in "exception" File1.txt File2.txt | grep -v "pattern"
I am doubtful that Perl would give you a performance advantage here. The grep utility is a pre-compiled binary written in C; Perl is an interpreted language and carries an extra performance overhead which e.g. GNU grep does not. If grep is slow, the bottleneck is most likely the file being loaded from disk into main memory. How big is the file?
FYI, grep has flags which enable case-insensitive matching and Perl-style regex syntax.
% grep -i 'abc' <file> # Matches abc, ABC, aBc, etc.
% grep -i 'ab\|cd' <file> # Matches ab or cd
% grep -P 'ab|cd' <file> # Matches ab or cd
An equivalent Perl program is:
# grep.pl
my $pat = shift;                  # first argument: the pattern
while (<>) { print if /$pat/i; }  # case-insensitive match over the remaining files
which can be invoked as
perl grep.pl abc file1.txt file2.txt ...
My advice, stick with grep.
grep -i 'pattern' file   ->  perl -lne 'print if /pattern/i' file
grep -vi 'pattern' file  ->  perl -lne 'print unless /pattern/i' file
If you want everything with line numbers as well, then replace the print
with print "$. $_" in the above commands.
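A runnable comparison of the two approaches; the sample file is created on the fly, so the content is made up for illustration:

```shell
f=$(mktemp)
printf 'foo\nException here\nbar\nEXCEPTION too\n' > "$f"
# Both commands print matching lines prefixed with their line numbers:
grep -in 'exception' "$f"                        # 2:Exception here / 4:EXCEPTION too
perl -lne 'print "$.:$_" if /exception/i' "$f"   # same output
rm -f "$f"
```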

Perl wget quotes syntax issue

thank you for reading.
For a shell command to wget, something like this works:
wget -q -O - http://www.myweb.com | grep -oe '\w*.\w*#\w*.\w*.\w\+' | sort -u
However, when I try to insert that command inside the Perl program, I get syntax errors: "backslashes found where operator expected" and "bareword found where operator expected". I replaced the quotes surrounding the regex with {}, which silences the error, but then the regex has no effect, so the curly braces are clearly the wrong approach.
This is the code, it is inside a foreach:
foreach (@my_array) {
$browser->get($_);
# and here below is where the error comes
system ('wget -q -O -"$_" | grep -oe '\w*.\w*#.\w*.\w\+' | sort -u');
If I replace the single quotes wrapping the regex with {}, then wget does fetch the URLs but the grep command has no effect.
So that is the issue: how do I keep the quotes from breaking the syntax?
You are using single quotes ' in your system call. They do not interpolate variables for you, so the $_ is not getting replaced. Also, the single quotes next to the grep pattern make this invalid Perl syntax.
Try this instead:
system ("wget -q -O - $_ | grep -oe '\w*.\w*\#.\w*.\w\+' | sort -u");
You can also use the qq operator:
system ( qq( wget -q -O - $_ | grep -oe '\w*.\w*\#.\w*.\w\+' | sort -u) );
Also, look at perlop.
Another thought: If you have $browser object that can get() the url, why do you need to use wget? You could also do this in Perl.
You want this:
system ("wget -q -O -\"$_\" | grep -oe '\\w*.\\w*#.\\w*.\\w\\+' | sort -u");
You can include what you like within double quotes, only you have to escape certain characters.
Incidentally, Perl's qq() operator might interest you. You can look it up.

Assign variable from perl to csh

I have a csh script. My goal is to read an INI config file with the Perl module Config::Simple, execute my Perl command, and assign the result to one variable.
perl -MConfig::Simple -e '$cfg = new Config::Simple("../config.ini"); $myvar = $cfg->param("myvar");'
What is the syntax?
Receive the script's output into a variable? I don't know the csh syntax, but in bash that is:
myvar=`perl ....`;
But if you want to set several variables, it's not so simple.
For setting several variables you could have the perl script print csh syntax that the shell would evaluate.
I don't know csh, but in bash it can be done like this:
#!/bin/sh
eval `perl -E 'say "FOO=123"; say "BAR=456"'`
echo "FOO is $FOO"
Command substitution in csh looks like this:
#!/bin/csh
set VAR=`perl -E 'say q(hello world)'`
echo ${VAR}
And, as an aside, I hope you're using a descendant of csh like tcsh. The original csh implementation is a brain-dead mangled shell. This classic paper describes why.
I had a similar requirement, where I was using Python instead of Perl.
My Python script produced three output strings whose values needed to be set as csh variables.
Here's a solution; hope it helps!
Inside Python:
I used three variables: x, y and z. I had only
one print statement in Python, which printed: x,y,z
Inside CSH:
set py_opt = `./my_python_script`
set csh_x = `echo $py_opt | sed 's/,/ /g' | awk '{print $1}'`
set csh_y = `echo $py_opt | sed 's/,/ /g' | awk '{print $2}'`
set csh_z = `echo $py_opt | sed 's/,/ /g' | awk '{print $3}'`
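For comparison, in a POSIX shell (not csh) the three sed|awk passes can collapse into a single read; the 1,2,3 values below are made up for illustration:

```shell
py_opt="1,2,3"        # stands in for: `./my_python_script`
# Split on commas in one pass; IFS applies only to this read.
IFS=, read -r x y z <<EOF
$py_opt
EOF
echo "$x $y $z"       # prints: 1 2 3
```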

perl - one liner to loop through and kill pids

How do I execute the kill -9 in this perl one liner? I have gotten down to where I have the pids listed and can print it out to a file, like so:
ps -ef | grep -v grep |grep /back/mysql | perl -lane '{print "kill -9 $F[1]"}'
Have you considered pkill or pgrep?
pkill /back/mysql
or
pgrep /back/mysql | xargs kill -9
OK, heavily edited from my original answer.
First, the straightforward answer:
ps -ef | grep -v grep |grep /back/mysql | perl -lane 'kill 9, $F[1]'
Done.
But grep | grep | perl is kind of a silly way to do that. My initial reaction is "Why do you need Perl?" I would normally do it with awk | kill, saving Perl for more complicated problems that justify the extra typing:
ps -ef | awk '/\/back\/mysql/ {print $2}' | xargs kill -9
(Note that the awk won't find itself because the string "\/back\/mysql" doesn't match the pattern /\/back\/mysql/)
You can of course use Perl in place of awk:
ps -ef | perl -lane 'print $F[1] if /\/back\/mysql/' | xargs kill -9
(I deliberately used leaning toothpicks instead of a different delimiter so the process wouldn't find itself, as in the awk case.)
The question then switches from "Why do you need perl?" to "Why do you need grep/awk/kill?":
ps -ef | perl -lane 'kill 9, $F[1] if /\/back\/mysql/'
Let's use a more appropriate ps command, for starters.
ps -e -o pid,cmd --no-headers |
perl -lane'kill(KILL => $F[0]) if $F[1] eq "/back/mysql";'
ps -ef | grep -v grep |grep /back/mysql | perl -lane 'kill(9, $F[1])'
The kill function is available in Perl.
You could omit the two grep commands too:
ps -ef | perl -lane 'kill(9, $F[1]) if m%/back/mysql\b%'
(untested)
Why aren't you using even more Perl?
ps -ef | perl -ane 'kill 9,$F[1] if m{/back/mysql}'
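Perl's built-in kill can be exercised safely on a throwaway process; this sketch uses sleep rather than mysql, and sends TERM rather than 9:

```shell
sleep 60 &            # throwaway process to signal
pid=$!
# kill returns the number of processes signaled; 0 means it failed.
perl -e 'kill "TERM", $ARGV[0] or die "kill failed: $!"' "$pid"
wait "$pid" 2>/dev/null || true
kill -0 "$pid" 2>/dev/null && echo still-alive || echo terminated
```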