My syntax is
my $pstree = `pstree -p $pid|wc`;
but I am getting an error:
sh: -c: line 1: syntax error near unexpected token `|'
Any thoughts?
Your variable $pid isn't just a number; it probably has a trailing newline character.
See it with:
use Data::Dumper;
print Data::Dumper->new([$pid])->Terse(1)->Useqq(1)->Dump;
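If that is the cause, stripping the trailing whitespace before interpolating makes the shell see a single command again; a minimal sketch:
chomp $pid;                      # or: $pid =~ s/\s+\z//;
my $pstree = `pstree -p $pid | wc`;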
It's valid Perl; your shell is what is complaining. Did you put #!/bin/perl at the top of the script? It's probably being interpreted by bash, not perl.
host:/var/tmp root# ./try.pl
5992 zsched
6875 /usr/local/sbin/sshd -f /usr/local/etc/sshd_config
3691 /usr/local/sbin/sshd -f /usr/local/etc/sshd_config -R
3711 -tcsh
6084 top 60
===
5 16 175
host:/var/tmp root# cat try.pl
#!/bin/perl
my $pstree = `ptree 3691`;
my $wc = `ptree 3691 | wc`;
print STDOUT $pstree;
print STDOUT "===\n";
print STDOUT $wc;
Instead of using the shell to do your counting, you can use Perl, which saves you a process and some complexity in your shell command:
my $count = () = qx(pstree -p $pid);
qx() does the same thing as backticks. The empty parentheses put the qx() in list context, which makes it return a list of lines; that list assignment, evaluated in scalar context, then yields the number of elements. It is a shortcut for:
my @list = qx(pstree -p $pid);
my $count = @list;
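If you want to avoid the shell entirely, you can also open a pipe to the command and count the lines yourself; a sketch (assumes pstree is on your PATH):
open my $fh, '-|', 'pstree', '-p', $pid or die "Cannot run pstree: $!";
my $count = 0;
$count++ while <$fh>;
close $fh;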
I want to count the lines in a file and print a string which depends on the line number. But my while loop misses the first line. I believe the while (<>) construct is necessary to increment the $n variable; anyway, isn't this construct pretty standard in Perl?
How do I get the while loop to print the first line? Or should I not be using while?
> printf '%s\n%s\n' dog cat
dog
cat
> printf '%s\n%s\n' dog cat | perl -n -e 'use strict; use warnings; print; '
dog
cat
> printf '%s\n%s\n' dog cat | perl -n -e 'use strict; use warnings; while (<>) { print; } '
cat
>
> printf '%s\n%s\n' dog cat | perl -n -e 'use strict; use warnings; my $n=0; while (<>) { $n++; print "$n:"; print; } '
1:cat
The perlrun man page shows:
-n causes Perl to assume the following loop around your program, which makes it iterate over filename arguments somewhat like sed -n or awk:
LINE:
while (<>) {
... # your program goes here
}
Note that the lines are not printed by default. See "-p" to have lines printed. If a file named by an argument cannot be opened for some reason, Perl warns you about it and moves on to the next file. Also note that "<>" passes command line arguments to "open" in perlfunc, which doesn't necessarily interpret them as file names. See perlop for possible security implications.
...
...
"BEGIN" and "END" blocks may be used to capture control before or after the implicit program loop, just as in awk.
So, in fact, you are running this script:
LINE:
while (<>) {
    # your program starts here
    use strict;
    use warnings;
    my $n=0;
    while (<>) {
        $n++;
        print "$n:";
        print;
    }
    # end
}
Solution: just remove the -n.
printf '%s\n%s\n' dog cat | perl -e 'use strict; use warnings; my $n=0; while (<>) { $n++; print "$n:"; print; }'
Will print:
1:dog
2:cat
or
printf '%s\n%s\n' dog cat | perl -ne 'print ++$n, ":$_"'
with the same result
or
printf '%s\n%s\n' dog cat | perl -pe '++$n;s/^/$n:/'
but ikegami's solution
printf "one\ntwo\n" | perl -ne 'print "$.:$_"'
is the best, since $. already holds the current input line number.
There's a way to figure out what your one-liner is actually doing. The B::Deparse module can show you how perl interpreted your source code. You load it through the O module (capital letter O, not zero) with -M (ikegami explains this on PerlMonks):
$ perl -MO=Deparse -ne 'while(<>){print}' foo bar
LINE: while (defined($_ = readline ARGV)) {
    while (defined($_ = readline ARGV)) {
        print $_;
    }
}
-e syntax OK
Heh, googling for the module link shows I wrote about this for The Effective Perler. Same example. I guess I'm not that original.
If you can't change the command line, perhaps because it's in the middle of a big script or something, you can set options in PERL5OPT. Then those options last for just the session. I hate changing the original scripts because it seems that no matter how careful I am, I mess up something (how many times has my brain told me "hey dummy, you know what a git branch is, so you should have used that first"):
$ export PERL5OPT='-MO=Deparse'
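With that in the environment, an ordinary run deparses the one-liner instead of executing it, so you can see the implicit loop without touching the command line; the exact output varies by Perl version, but it looks roughly like this (somefile is just a placeholder):
$ perl -ne 'print "$.:$_"' somefile
LINE: while (defined($_ = readline ARGV)) {
    print "$.:$_";
}
-e syntax OK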
I have a file which contains each user's userid and password. I need to fetch the userid and password from that file by passing the userid as a search element to an awk command.
user101,smith,smith#123
user102,jones,passj#007
user103,albert,albpass#01
I am using an awk command inside my Perl script like this:
...
...
my $userid = $ARGV[0];
my $user_report_file = "report_file.txt";
my $data = `awk -F, '$1 ~ /$userid/ {print $2, $3}' $user_report_file`;
my ($user,$pw) = split(" ",$data);
...
...
Here I am getting the error:
awk: ~ /user101/ {print , }
awk: ^ syntax error
But if I run the same command in a terminal window, it gives the expected result:
$] awk -F, '$1 ~ /user101/ {print $2, $3}' report_file.txt
smith smith#123
What could be the issue here?
The backticks are a double-quoted context, so you need to escape any literal $ that you want awk to interpret.
my $data = `awk -F, '\$1 ~ /$userid/ {print \$2, \$3}' $user_report_file`;
If you don't do that, you're interpolating the capture variables from the last successful Perl match.
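Another way to cut down on the escaping is to hand the value to awk as an awk variable with -v, so the only interpolation left in the backticks is the one you actually want; a sketch, assuming $userid contains no shell metacharacters:
my $data = `awk -F, -v id="$userid" '\$1 ~ id {print \$2, \$3}' $user_report_file`;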
When I have these sorts of problems, I try the command as a string first to see if it is what I expect:
my $data = "awk -F, '\$1 ~ /$userid/ {print \$2, \$3}' $user_report_file";
say $data;
Here's the Perl equivalent of that command:
$ perl -aF, -e '$F[0]=~/101/ && print "@F[1,2]"' report_file
But, this is something you probably want to do in Perl instead of creating another process:
Interpolating data into external commands can go wrong, such as a filename that is foo.txt; rm -rf /.
The awk you run is the first one in the path, so someone can make that a completely different program (so use the full path, like /usr/bin/awk).
Taint checking can tell you when you are passing unsanitized data to the shell.
Inside a program you don't get all the shortcuts, but if this is the part of your program that is slow, you probably want to rethink how you are accessing this data because scanning the entire file with any tool isn't going to be that fast:
open my $fh, '<', $user_report_file or die;
while ( <$fh> ) {
    chomp;
    my @F = split /,/;
    next unless $F[0] =~ /\Q$userid/;
    print "@F[1,2]";
    last; # if you only want the first one
}
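And if you're going to look up more than one userid, it's probably cheaper to read the file once into a hash and answer every lookup from memory; a rough sketch, assuming the first comma-separated field is the userid:
open my $fh, '<', $user_report_file or die "Cannot open $user_report_file: $!";
my %by_id;
while (<$fh>) {
    chomp;
    my ($id, $user, $pw) = split /,/;
    $by_id{$id} = [ $user, $pw ];
}
close $fh;
# later, each lookup is a hash access instead of a file scan
my ($user, $pw) = @{ $by_id{$userid} || [] };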
I was trying out a sample program that calculates the size of a file given as a command-line argument. It gives the size correctly when I have a file name stored inside a variable, but produces no output when the filename comes from the command-line arguments.
#! /usr/bin/perl
use File::stat;
while (<>) {
    if (($_ cmp "\n") == 0) {
        exit 0;
    }
    else {
        my $file_size = stat($_)->size; # $filesize = -s $_;
        print $file_size;
    }
}
I get no output when using the file test operator -s, and I get errors when using the File::stat module:
Unsuccessful stat on filename containing newline at /usr/share/perl/5.10/File/stat.pm line 49, <> line 1.
Can't call method "size" on an undefined value at 2.pl line 17, <> line 1.
1.txt is the filename I'm giving as an input.
#!/usr/bin/perl
for (@ARGV) {
    my $file_size = -s $_;
    print $file_size;
}
or a similar command-line one-liner,
perl -E 'say "$_, size: ", -s for @ARGV' *
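If you'd rather keep reading filenames from standard input with while (<>), as in the question, chomping the trailing newline is what the error message is pointing at; a sketch:
#!/usr/bin/perl
use strict;
use warnings;
use File::stat;
while (<>) {
    chomp;                    # drop the newline that made stat() fail
    last if $_ eq '';         # stop on an empty line, as in the original
    my $st = stat($_) or die "Cannot stat '$_': $!";
    print $st->size, "\n";
}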
#!/usr/bin/perl -w
$filename = '/path/to/your/file.doc';
$filesize = -s $filename;
print $filesize;
Simple enough, right? First you create a string that contains the path to the file that you want to test, then you use the -s File Test Operator on it. You could easily shorten this to one line using simply:
print -s '/path/to/your/file.doc';
Also, keep in mind that this will always return true if a file is larger than zero bytes, but will be false if the file size is zero. It makes a handy and quick way to check for zero byte files.
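For instance, a quick zero-byte check might look like this (the path is just the placeholder from the example above):
my $filename = '/path/to/your/file.doc';
if (-s $filename) {
    print "$filename is ", -s $filename, " bytes\n";
} else {
    print "$filename is empty or missing\n";
}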
How can I make the following external command within backticks work with variables instead?
Or something similar?
sed -i.bak -e '10,16d;17d' $docname; (this works)
I.e., sed -i.bak -e '$line_number,$line_end_number;$last_line' $docname;
my $result =
qx/sed -i.bak -e "$line_number,${line_end_number}d;${last_line}d" $docname/;
Where the line split avoids the horizontal scroll-bar on SO; otherwise, it would be on one line.
Or, since it is not clear that there's any output to capture:
system "sed -i.back '$line_number,${line_end_number}d;${last_line}d' $docname";
Or you could split that up into arguments yourself:
system "sed", "-i.back", "$line_number,${line_end_number}d;${last_line}d", "$docname";
This tends to be safer since the shell doesn't get a chance to interfere with the interpretation of the arguments.
@args = ("command", "arg1", "arg2");
system(@args) == 0 or die "system @args failed: $?";
For more, see the manual:
perldoc -f system
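Putting those pieces together for the sed example, the list form with the manual's error check might look like this (a sketch using the same variables as above):
my @args = ('sed', '-i.bak', "$line_number,${line_end_number}d;${last_line}d", $docname);
system(@args) == 0 or die "system @args failed: $?";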
I think you should read up on using qq for strings.
You probably want something like this:
use strict;
use warnings;
my $line_number = qq|10|;
my $line_end_number = qq|16d|;
my $last_line = qq|17d|;
my $doc_name = qq|somefile.bak|;
my $sed_command = qq|sed -i.bak -e '$line_number,$line_end_number;$last_line' $doc_name;|;
print $sed_command;
qx|$sed_command|;
I noticed that when I use backticks in perl the commands are executed using sh, not bash, giving me some problems.
How can I change that behavior so perl will use bash?
PS. The command that I'm trying to run is:
paste filename <(cut -d \" \" -f 2 filename2 | grep -v mean) >> filename3
The "system shell" is not generally mutable. See perldoc -f exec:
If there is more than one argument in LIST, or if LIST is an array with more than one value, calls execvp(3) with the arguments in LIST. If there is only one scalar argument or an array with one element in it, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is "/bin/sh -c" on Unix platforms, but varies on other platforms).
If you really need bash to perform a particular task, consider calling it explicitly:
my $result = `/usr/bin/bash command arguments`;
or even:
open my $bash_handle, '| /usr/bin/bash' or die "Cannot open bash: $!";
print $bash_handle 'command arguments';
You could also put your bash commands into a .sh file and invoke that directly:
my $result = `/usr/bin/bash script.sh`;
Try
`bash -c \"your command with args\"`
I am fairly sure the argument of -c is interpreted the way bash interprets its command line. The trick is to protect it from sh - that's what quotes are for.
This example works for me:
$ perl -e 'print `/bin/bash -c "echo <(pwd)"`'
/dev/fd/63
To deal with running bash and nested quotes, this article provides the best solution: How can I use bash syntax in Perl's system()?
my @args = ( "bash", "-c", "diff <(ls -l) <(ls -al)" );
system(@args);
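Applied to the command from the question, that would look something like this (a sketch; the filenames are the placeholders from the question):
my @args = ('bash', '-c',
    'paste filename <(cut -d " " -f 2 filename2 | grep -v mean) >> filename3');
system(@args) == 0 or die "bash -c failed: $?";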
I thought Perl would honor the $SHELL variable, but then it occurred to me that its behavior might actually depend on your system's exec implementation. In mine, it seems that exec will execute the shell (/bin/sh) with the path of the file as its first argument.
You can always do qw/bash your-command/, no?
Create a perl subroutine:
sub bash { return `cat << 'EOF' | /bin/bash\n$_[0]\nEOF\n`; }
And use it like below:
my $bash_cmd = 'paste filename <(cut -d " " -f 2 filename2 | grep -v mean) >> filename3';
print &bash($bash_cmd);
Or use perl here-doc for multi-line commands:
$bash_cmd = <<'EOF';
for (( i = 0; i < 10; i++ )); do
echo "${i}"
done
EOF
print &bash($bash_cmd);
I like to define two functions, btck() (which integrates error checking) and bash_btck() (which uses bash):
use Carp;
sub btck ($)
{
    # Like backticks but the error check and chomp() are integrated
    my $cmd = shift;
    my $result = `$cmd`;
    $? == 0 or confess "backtick command '$cmd' returned non-zero";
    chomp($result);
    return $result;
}

sub bash_btck ($)
{
    # Like backticks but use bash and the error check and chomp() are
    # integrated
    my $cmd = shift;
    my $sqpc = $cmd;    # Single-Quote-Protected Command
    $sqpc =~ s/'/'"'"'/g;
    my $bc = "bash -c '$sqpc'";
    return btck($bc);
}
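Usage then looks like ordinary backticks, except the command runs under bash; a small sketch:
my $version = bash_btck('echo "$BASH_VERSION"');
print "$version\n";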
One of the reasons I like to use bash is for safe pipe behavior:
sub safe_btck ($)
{
return bash_btck('set -o pipefail && '.shift);
}