Perl backticks: flags do not exist

When executing the following segment of code,
sub list {
my ($self) = @_;
my $file = $self->{P_Dir}."/".$self->{Name};
print `ls –l $file`;
}
I get this error:
ls: cannot access –l: No such file or directory
I am not really sure what is causing that, since if I manually type ls -l into the command line, I do not see that error.

That – you've thankfully copy-and-pasted is a Unicode en dash character (U+2013), not the ASCII hyphen-minus - (U+002D). Because it is not a hyphen, ls does not see an option at all; it treats –l as a filename and reports that it does not exist.
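The difference is easy to verify by dumping the bytes of each character; a quick sketch using od:

```shell
# The ASCII hyphen is a single byte (2d); the en dash is three UTF-8 bytes (e2 80 93).
printf '%s' '-' | od -An -tx1
printf '%s' '–' | od -An -tx1
```

Any character whose byte dump is not plain 2d will not be recognized as an option marker by ls.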

Hmmm... It works for me though:
$ cat test.pl
#!/usr/bin/perl -w
use strict;
my $file = "rpm.pl";
print `ls -l $file`;
$ perl test.pl
-rw-r--r-- 1 dheeraj dheeraj 922 2012-10-22 19:56 rpm.pl

Related

Perl only executes part of shell command and emits warning 'redundant argument in printf' in command substitution

The following has been tested with perl 5.24 on OS X 10.11.5.
I wrote a short program (perl-embed.pl) to determine whether perl escapes shell metacharacters when interpolating strings into backticks (it doesn't).
use strict;
use warnings;
my $bar = '" ; echo 45 ; "';
printf "%s\n", `echo "hi${bar}ls"`;
I was very surprised to see that this generated a warning and only executed part of the command.
$ perl perl-embed.pl
Redundant argument in printf at perl-embed.pl line 6.
hi
For comparison the following program (perl-embed2.pl) with print instead of printf runs without warnings.
use strict;
use warnings;
my $bar = '" ; echo 45 ; "';
print `echo "hi${bar}ls"`;
I then ran it.
$ perl perl-embed2.pl
hi
45
<contents of current working directory>
perl-embed.pl's behavior is totally unexpected. printf interpolates the contents of strings just fine in other contexts, even if the string contains weird characters.
$ perl -Mstrict -Mwarnings -e 'printf "%s\n", q[5]'
5
$ perl -Mstrict -Mwarnings -e 'printf "%s\n", q["]'
"
The system perl (version 5.18) does not emit this warning, but seems not to execute ls or echo 45 as we would expect:
$ /usr/bin/perl perl-embed.pl
hi
$ /usr/bin/perl perl-embed2.pl
hi
45
<contents of current directory>
Why is perl behaving this way? Note that in every case perl is exiting normally.
You are using backticks in list context, so the expression
`echo "hi${bar}ls"`
will run the command
`echo "hi" ; echo 45 ; "ls"`
and return each line of output in a separate element, for example
( "hi",
"45",
"foo",
... # other files in current directory
)
But the template in printf ("%s\n") only has one placeholder, so printf gets confused and issues the warning, just as if you said
perl -we 'printf "%d\n", 1, 2, 3, 4'

how to use system command 'grep' in perl script

I am trying to count matching lines using the grep command in a Perl script. The script below counts matches across the whole directory; my desired output should contain only the count for the input file, not the whole directory. Could someone help me do that?
#!/usr/bin/perl
use strict;
print"Enter file name for Unzip\n";
print"File name: ";
chomp(my $Filename=<>);
system("gunzip -r ./$Filename/*\n");
system('grep -c "#SRR" ./$Filename/*');
This is giving whole directory count.
#!/usr/bin/perl
use strict;
print"Enter file name for Unzip\n";
print"File name: ";
chomp(my $Filename=<>);
system("gunzip -r ./$Filename*");
system("grep -c '#SRR' ./$Filename*");
Please let me know if I misunderstood the question, but the above code gives the number of lines matching #SRR in the files under the filename provided.
Also, you don't need to unzip in order to count; you can do it directly:
system("zgrep -c '#SRR' $Filename");
instead of
system("gunzip -r ./$Filename*");
system("grep -c '#SRR' ./$Filename*");
my $var = `cat filename | grep "your word"`;
Thanks,
nilesh.

Executing shell command with pipe in perl

I want the output of a shell command captured in a variable of a Perl script, but only the first section of the command, before the pipe "|", is getting executed. There is no error while executing the script.
File.txt
Error input.txt got an error while parsing
Info output.txt has no error while parsing
my $var = `grep Error ./File.txt | awk '{print $2}'`;
print "Errored file $var";
Errored file Error input.txt got an error while parsing
I want just input.txt, which the awk command should filter out, but that is not happening. Please help.
The $ in $2 is interpolated by Perl, so the command that the shell receives looks like:
grep Error ./File.txt | awk '{print }'
(or something else if you have recently matched a regular expression with capture groups). The workaround is to escape the dollar sign:
my $var = `grep Error ./File.txt | awk '{print \$2}'`
Always include use strict; and use warnings; in EVERY perl script.
If you had, you'd have gotten the following warning:
Use of uninitialized value $2 in concatenation (.) or string at scratch.pl line 4.
This would've alerted you to the problem in your command, namely that the $2 variable is being interpolated instead of being treated like a literal.
There are three ways to avoid this.
1) You can do what mob suggested and just escape the $2
my $var = `grep Error ./File.txt | awk '{print \$2}'`
2) You can use the qx form of backticks with single quotes so that it doesn't interpolate, although that is less ideal because you are using single quotes inside your command:
my $var = qx'grep Error ./File.txt | awk \'{print $2}\''
3) You can just use a pure perl solution.
use strict;
use warnings;
my ($var) = do {
local @ARGV = 'File.txt';
map {(split ' ')[1]} grep /Error/, <>;
};

File comparison with multiple columns

I am doing a directory cleanup to check for files that are not being used in our testing environment. I have a list of all the file names which are sorted alphabetically in a text file and another file I want to compare against.
Here is how the first file is setup:
test1.pl
test2.pl
test3.pl
It is a simple, one script name per line text file of all the scripts in the directory I want to clean up based on the other file below.
The file I want to compare against is a tab-separated file which lists, for each server, the scripts it runs as tests, so there are obviously many duplicates. I want to strip the testing script names out of this file, write them to another file, and run them through sort and uniq, so that I can diff that file with the one above to see which testing scripts are not being used.
The file is setup as such:
server: : test1.pl test2.pl test3.pl test4.sh test5.sh
There are some lines with fewer entries and some with more. My first impulse was to write a Perl script to split each line and push the values into a list if they are not already there, but that seems wholly inefficient. I am not too experienced with awk, but I figured there is more than one way to do it. Any other ideas for comparing these files?
A Perl solution that makes a %needed hash of the files being used by the servers and then checks against the file containing all the file names.
#!/usr/bin/perl
use strict;
use warnings;
use Inline::Files;
my %needed;
while (<SERVTEST>) {
chomp;
my (undef, @files) = split /\t/;
@needed{ @files } = (1) x @files;
}
while (<TESTFILES>) {
chomp;
if (not $needed{$_}) {
print "Not needed: $_\n";
}
}
__TESTFILES__
test1.pl
test2.pl
test3.pl
test4.pl
test5.pl
__SERVTEST__
server1:: test1.pl test3.pl
server2:: test2.pl test3.pl
__END__
*** prints
C:\Old_Data\perlp>perl t7.pl
Not needed: test4.pl
Not needed: test5.pl
This rearranges the filenames in the second file to one per line via awk, then diffs the output with the first file.
diff file1 <(awk '{ for (i=3; i<=NF; i++) print $i }' file2 | sort -u)
Quick and dirty script to do the job. If it sounds good, use open to read the files with proper error checking.
use strict;
use warnings;
my @server_lines = `cat server_file`;
chomp(@server_lines);
my @test_file_lines = `cat test_file_lines`;
chomp(@test_file_lines);
foreach my $server_line (@server_lines){
$server_line =~ s!server: : !!is;
my @files_to_check = split(/\s+/, $server_line);
foreach my $file_to_check (@files_to_check){
my @found = grep { /$file_to_check/ } @test_file_lines;
if (scalar(@found) == 0){
print "$file_to_check is not found in $server_line\n";
}
}
}
If I understand your need correctly you have a file with a list of tests (testfiles.txt):
test1.pl
test2.pl
test3.pl
test4.pl
test5.pl
And a file with a list of servers, with files they all test (serverlist.txt):
server1: : test1.pl test3.pl
server2: : test2.pl test3.pl
(Where I have assumed all spaces as tabs).
If you convert the second file into a list of tested files, you can then compare this using diff to your original file.
cut -d: -f3 serverlist.txt | sed -e 's/^\t//g' | tr '\t' '\n' | sort -u > tested_files.txt
The cut removes the server name and ':', the sed removes the leading tab left behind, tr then converts the remaining tabs into newlines, then we do a unique sort to sort and remove duplicates. This is output to tested_files.txt.
Then all you do is diff testfiles.txt tested_files.txt.
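A sketch of the whole pipeline on throwaway files, assuming tab-separated fields and GNU sed (whose s command understands \t):

```shell
# Two sample server lines, tab-separated as described above.
printf 'server1:\t:\ttest1.pl\ttest3.pl\nserver2:\t:\ttest2.pl\ttest3.pl\n' > serverlist.txt
# Drop the server name and ':', strip the leading tab, split tabs into lines, dedupe.
cut -d: -f3 serverlist.txt | sed -e 's/^\t//g' | tr '\t' '\n' | sort -u
```

The result is the sorted, unique list of tested scripts, ready to diff against the master list.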
It's hard to tell since you didn't post the expected output but is this what you're looking for?
$ cat file1
test1.pl
test2.pl
test3.pl
$
$ cat file2
server: : test1.pl test2.pl test3.pl test4.sh test5.sh
$
$ gawk -v RS='[[:space:]]+' 'NR==FNR{f[$0]++;next} FNR>2 && !f[$0]' file1 file2
test4.sh
test5.sh

How can I grep for a value from a shell variable?

I've been trying to grep an exact shell 'variable' using word boundaries,
grep "\<$variable\>" file.txt
but haven't managed to get it to work, despite trying everything I could think of.
Actually I'm invoking grep from a Perl script:
$attrval = `/usr/bin/grep "\<$_[0]\>" $upgradetmpdir/fullConfiguration.txt`;
$_[0] and $upgradetmpdir/fullConfiguration.txt contain some matching "text",
but $attrval is empty after the operation.
@OP, you should do that 'grepping' in Perl. Don't call system commands unnecessarily unless there is no choice.
$mysearch="pattern";
while (<>){
chomp;
@s = split /\s+/;
foreach my $line (@s){
if ($line eq $mysearch){
print "found: $line\n";
}
}
}
I'm not seeing the problem here:
file.txt:
hello
hi
anotherline
Now,
mala@human ~ $ export GREPVAR="hi"
mala@human ~ $ echo $GREPVAR
hi
mala@human ~ $ grep "\<$GREPVAR\>" file.txt
hi
What exactly isn't working for you?
Not every grep supports the ex(1) / vi(1) word boundary syntax.
I think I would just do:
grep -w "$variable" ...
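A quick sketch of -w in action on a throwaway file:

```shell
# -w matches whole words only, so "foo" matches but "foobar" does not.
printf 'foo\nfoobar\n' > file.txt
variable=foo
grep -w "$variable" file.txt
```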
Using single quotes works for me in tcsh:
grep '<$variable>' file.txt
I am assuming your input file contains the literal string: <$variable>
If variable=foo are you trying to grep for "foo"? If so, it works for me. If you're trying to grep for the variable named "$variable", then change the quotes to single quotes.
On a recent Linux it works as expected. You could try egrep instead.
Say you have
$ cat file.txt
This line has $variable
DO NOT PRINT ME! $variableNope
$variable also
Then with the following program
#! /usr/bin/perl -l
use warnings;
use strict;
system("grep", "-P", '\$variable\b', "file.txt") == 0
or warn "$0: grep exited " . ($? >> 8);
you'd get output of
This line has $variable
$variable also
It uses the -P switch to GNU grep that matches Perl regular expressions. The feature is still experimental, so proceed with care.
Also note the use of system LIST that bypasses shell quoting, allowing the program to specify arguments with Perl's quoting rules rather than the shell's.
You could use the -w (or --word-regexp) switch, as in
system("grep", "-w", '\$variable', "file.txt") == 0
or warn "$0: grep exited " . ($? >> 8);
to get the same result.
Using single quotes it won't work. You should go for double quotes.
For example:
this won't work
--------------
for i in 1
do
grep '$i' file
done
this will work
--------------
for i in 1
do
grep "$i" file
done