I have a perl script that needs to check for an empty directory on a remote machine. Using ksh I can get the following shell script to work:
ksh# ssh user@host '[ "$(ls -A /empty/dir/* 2>/dev/null)" ] && echo "1" || echo "0"'
This correctly returns a "0" if the directory is empty or does not exist. It returns a "1" only if the directory contains something.
When I place this line inside the perl script, though, like so:
#!/usr/bin/perl
print `ssh user\@host '[ "$(ls -A /empty/dir/* 2>/dev/null)" ] && echo "1" || echo "0"'`
No matter what I put in there, it returns a "1", empty directory or not. I've compared the env values between the normal shell and the perl script and they are the same.
Does anyone have any ideas why this command would return different results only in the perl script?
Both machines are AIX 6.1 with KSH as the default shell.
Text inside backticks is interpolated as if it were inside double quotes before being passed to the OS. Run
print qq`ssh user\@host '[ "$(ls -A /empty/dir/* 2>/dev/null)" ] && echo "1" || echo "0"'`
to see exactly what string is getting passed to the OS. I'll bet you'll at least have to escape the $.
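For instance, escaping the $ (and the @ in the host spec) so Perl passes them through untouched reproduces the ksh behavior; a minimal sketch:
print `ssh user\@host '[ "\$(ls -A /empty/dir/* 2>/dev/null)" ] && echo "1" || echo "0"'`;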
A safer and saner way is to build your command first and run it inside backticks later:
# q{...} does no interpolation
my $cmd = q{ssh user@host '[ "$(ls -A /empty/dir/* 2>/dev/null)" ] && echo "1" || echo "0"'};
print `$cmd`;
use Net::SFTP::Foreign;
my $s = Net::SFTP::Foreign->new('user@host');
my $empty = 1;
if (my $d = $s->opendir('/empty/dir')) {
    # readdir also returns "." and "..", so skip them before
    # deciding the directory is non-empty
    while (defined(my $e = $s->readdir($d))) {
        next if $e->{filename} =~ /^\.\.?\z/;
        $empty = 0;
        last;
    }
}
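To mirror the ksh one-liner's output (a small addition, not in the original answer; 1 = has content, 0 = empty or missing):
print $empty ? "0\n" : "1\n";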
I have a shell script (csh) calling Perl like this:
set SHELL_VAR = "foo"
set RES = `perl -e 'my $perl_var = uc("$SHELL_VAR"); print "$perl_var\n"'`
echo $RES
I did not manage to use single or double quotes properly, no matter which combination I tried.
How do I properly mix variables in Perl and shell?
Both start with $. With double quotes the shell substitutes variable values, but then it reports the error
perl_var: Undefined variable.
in the shell. Enclosing the Perl code in single quotes led to an empty result. Escaping like \$perl_var does not succeed either.
You are trying to insert the value of a shell var into a Perl literal of a program that's in a command line. That's two levels of escaping! Instead, just pass it to Perl as an argument.
% set SHELL_VAR = "foo"
% set RES = `perl -e'print uc($ARGV[0])' "$SHELL_VAR"`
% echo "$RES"
FOO
Doh! It's not quite right.
csh% set SHELL_VAR = foo\"\'\(\ \ \ \$bar
csh% set RES = `perl -e'print uc($ARGV[0])' "$SHELL_VAR"`
csh% echo "$RES"
FOO"'( $BAR
I should get
FOO"'(   $BAR
with the three spaces preserved.
In sh, I'd use RES="`...`" aka RES="$( ... )", but I don't know the csh equivalent. If someone could fix the above so multiple spaces are printed, I'd appreciate it!
sh$ shell_var=foo\"\'\(\ \ \ \$bar
sh$ res="$( perl -e'print uc($ARGV[0])' "$shell_var" )"
sh$ echo "$res"
FOO"'( $BAR
You can push your SHELL_VAR into the environment and let Perl pull it from there:
#!/bin/csh
setenv SHELL_VAR "foo"
set RES = `perl -e 'my $perl_var = uc($ENV{SHELL_VAR}); print "$perl_var\n"'`
echo $RES
Note that the setenv command lacks the explicit '=' for assignment. The environment variable is available to Perl from the %ENV hash with the variable name as its key.
I found another solution: you can simply concatenate several expressions, i.e. you can write
set XYZ = "foo"`date`bar$HOST'gugus'
for example. This is a concatenation of
foo + `date` + bar + $HOST + gugus
Thus the following works:
set SHELL_VAR = "foo"
set RES = `perl -e 'my $perl_var = uc("'$SHELL_VAR'"); print "$perl_var\n"'`
echo $RES
Simple question, how can I sudo test if a file exists using Net::OpenSSH? I really tried hard to figure this out, but with no result. Ok, I can use a wrapper script, but that's not what I want.
This is my wrapper sub. It works fine most of the time, but it's not testing for files?!
sub sudo {
    my $f   = shift;
    my $h   = shift;
    my $cmd = shift;
    my @out = $f->{ssh}->capture(
        { stdin_data => ["$f->{pwd}\n", @$h] },
        '/usr/bin/sudo', '-Sk', '-p', '', '--',
        @{$cmd},
    );
    return \@out;
}
&sudo($f, [], ['test', '-e', '/user/local/bin/cmd', '&&', 'echo', '1', '||', 'echo', '0']); # <= fails
&sudo($f, [], ['test -e /user/local/bin/cmd && echo 1 || echo 0']); # <= fails too
... # arbitrary other $str/$array combinations ...
I know, I don't have to sudo check a cmd in /usr/local/bin , it's just an example. Of course I also "chomped" the result after &sudo().
such a simple task ... and I'm stuck!
It seems I can't get the command concatenation working: there is an "extra argument" warning about the &&, and the sub always returns undef.
Is this a limitation? Is this supported at all? Using
$f->{ssh}->capture('test -e /user/local/bin/cmd && echo 1 || echo 0');
works fine, which is what I'm using right now. So an additional sudo in front should not be that much of a problem ...
Can anyone help?
The command supplied to sudo is handled like the list form of Perl's system() or exec(), that is, without using any shell. So shell metacharacters like && or || won't work here. To fix this, explicitly invoke a shell:
sudo($f, [], ['sh', '-c', 'test -e /user/local/bin/cmd && echo 1 || echo 0']);
Alternatively, instead of using capture() you could use Net::OpenSSH's system() and just inspect the return value; then there is no need for && and || at all.
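For instance, reusing the wrapper's fields from the question, something along these lines should work (a hedged sketch; Net::OpenSSH's system() returns true when the remote command exits with status 0):
my $ok = $f->{ssh}->system(
    { stdin_data => "$f->{pwd}\n" },
    '/usr/bin/sudo', '-Sk', '-p', '', '--',
    'test', '-e', '/user/local/bin/cmd',
);
print $ok ? "1\n" : "0\n";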
I am executing some shell commands via a perl script and capturing output, like this,
$commandOutput = `cat /path/to/file | grep "some text"`;
I also check if the command ran successfully or not like this,
if (!$commandOutput)
{
    # command not run!
}
else
{
    # further processing
}
This usually works and I get the output correctly. The problem is that in some cases the command itself does not produce any output. For instance, sometimes the text I am trying to grep will not be present in the target file, so no output is produced. In this case, my script reports "command not run", which is not true.
What is the correct way to differentiate between these 2 cases in perl?
You can use this to distinguish whether the command failed or merely returned nothing:
$val = `cat text.txt | grep -o '[0-9]*'`;
print "command failed" if ($?);     # non-zero $? means failure; note grep also exits non-zero when nothing matches
print "empty string" if (!length($val));
print "val = $val";
Assume that text.txt contains "123ab", from which you want to extract the number only.
Use $? to check if the command executed successfully: see backticks do not return any value in perl for an example.
If you skip the pipeline and run grep directly, you can check $? for a more specific exit status:
my $commandOutput = `grep "some text" /path/to/file`;
if ($? < 0)
{
    # command not run!
}
elsif ($? >> 8 > 1)
{
    # grep error (exit status 2), e.g. file not found
}
else
{
    # further processing
}
Is there a neater way of climbing up multiple directory levels from the location of a script?
This is what I currently have.
# get the full path of the script
D=$(cd ${0%/*} && echo $PWD/${0##*/})
D=$(dirname $D)
D=$(dirname $D)
D=$(dirname $D)
# second level parent directory of script
echo $D
I would like a neat way of finding the nth level. Any ideas other than putting in a for loop?
dir="/path/to/somewhere/interesting"
saveIFS=$IFS
IFS='/'
parts=($dir)
IFS=$saveIFS
level=${parts[3]}
echo "$level" # output: somewhere
#!/bin/bash
ancestor() {
    local n=${1:-1}
    # climb n levels in a subshell so the caller's directory is untouched
    (for ((; n != 0; n--)); do cd "$(dirname "$PWD")"; done; pwd)
}
Usage:
$ pwd
/home/nix/a/b/c/d/e/f/g
$ ancestor 3
/home/nix/a/b/c/d
A solution without loops would be to use recursion. I wanted to find a config file for a script by traversing upwards from my current working directory.
rtrav() { test -e "$2/$1" && echo "$2" || { test "$2" != / && rtrav "$1" "$(dirname "$2")"; }; }
To check if the current directory is in a GIT repo: rtrav .git $PWD
rtrav checks for the existence of the filename given by the first argument in each parent of the directory given as the second argument, printing the directory path where the file was found, or exiting with an error code if it was not found.
The predicate (test -e "$2/$1") could be swapped for a check against a counter that tracks the traversal depth.
If you're OK with including a Perl command:
$ pwd
/u1/myuser/dir3/dir4/dir5/dir6/dir7
The first command prints the directory built from the first path components (indices 0 through 5 in my case):
$ perl -e 'use File::Spec;
    my @dirs = File::Spec->splitdir(
        File::Spec->rel2abs( File::Spec->curdir() ) );
    my @dirs2 = @dirs[0..5];
    print File::Spec->catdir(@dirs2) . "\n";'
/u1/myuser/dir3/dir4/dir5
The second command prints the directory N levels up (in my case 5), which I think is what you wanted:
$ perl -e 'use File::Spec;
    my @dirs = File::Spec->splitdir(
        File::Spec->rel2abs( File::Spec->curdir() ) );
    my @dirs2 = @dirs[0..$#dirs-5];
    print File::Spec->catdir(@dirs2) . "\n";'
/u1/myuser
To use it in your bash script, of course:
D=$(perl -e 'use File::Spec;
    my @dirs = File::Spec->splitdir(
        File::Spec->rel2abs( File::Spec->curdir() ) );
    my @dirs2 = @dirs[0..$#dirs-5];
    print File::Spec->catdir(@dirs2) . "\n";')
Any ideas other than putting in a for loop?
In shells, you can't avoid the loop, because traditionally they do not support regexps, only glob matching, and glob patterns do not support any sort of repeat counter.
And by the way, the simplest way to do it in shell is: echo $(cd $PWD/../.. && echo $PWD), where the /../.. strips two levels.
With a tiny bit of Perl, that would be:
perl -e '$ENV{PWD} =~ m#^(.*)(/[^/]+){2}$# && print $1,"\n"'
The {2} in the Perl regular expression is the number of directory entries to strip. Or, making it configurable:
N=2
perl -e '$ENV{PWD} =~ m#^(.*)(/[^/]+){'$N'}$# && print $1,"\n"'
One can also use Perl's split(), join() and splice() for the purpose, e.g.:
perl -e '@a=split("/", $ENV{PWD}); print join("/", splice(@a, 0, -2)),"\n"'
where the -2 says that the last two entries have to be removed from the path.
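For example, with a hypothetical working directory:
$ cd /u1/myuser/dir3/dir4
$ perl -e '@a=split("/", $ENV{PWD}); print join("/", splice(@a, 0, -2)),"\n"'
/u1/myuser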
Two levels above the script directory:
echo "$(readlink -f -- "$(dirname -- "$0")/../..")"
All the quoting and -- are to avoid problems with tricky paths.
This method uses the actual full path to the perl script itself ... TIMTOWTDI
You could just as easily replace $RunDir with the path you would like to start from ...
# resolve the run dir where this script is placed
$0 =~ m/^(.*)(\\|\/)(.*)\.([a-z]*)/;
my $RunDir = $1;
# change the \'s to /'s if we are on Windows
$RunDir =~ s/\\/\//g;
my @DirParts = split('/', $RunDir);
for (my $count = 0; $count < 4; $count++) { pop @DirParts; }
my $ProductBaseDir = join('/', @DirParts);   # what is left after climbing four levels
$confHolder->{'ProductBaseDir'} = $ProductBaseDir;
This allows you to work your way up until whatever condition is desired:
WORKDIR=$PWD
until test -d "$WORKDIR/infra/codedeploy"; do
    # climb one level
    WORKDIR=$(dirname "$WORKDIR")
done
I noticed that when I use backticks in perl the commands are executed using sh, not bash, giving me some problems.
How can I change that behavior so perl will use bash?
PS. The command that I'm trying to run is:
paste filename <(cut -d \" \" -f 2 filename2 | grep -v mean) >> filename3
The "system shell" is not generally mutable. See perldoc -f exec:
If there is more than one argument in LIST, or if LIST is an array with more than one value, calls execvp(3) with the arguments in LIST. If there is only one scalar argument or an array with one element in it, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is "/bin/sh -c" on Unix platforms, but varies on other platforms).
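In other words (a small illustration, not from the original answer):
# one string containing metacharacters: passed to "/bin/sh -c"
system('echo $HOME | tr a-z A-Z');
# list form: execvp(3) directly, no shell, so metacharacters stay literal
system('echo', '$HOME | tr a-z A-Z');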
If you really need bash to perform a particular task, consider calling it explicitly:
my $result = `/usr/bin/bash command arguments`;
or even:
open my $bash_handle, '| /usr/bin/bash' or die "Cannot open bash: $!";
print $bash_handle 'command arguments';
You could also put your bash commands into a .sh file and invoke that directly:
my $result = `/usr/bin/bash script.sh`;
Try
`bash -c \"your command with args\"`
I am fairly sure the argument of -c is interpreted the way bash interprets its command line. The trick is to protect it from sh - that's what quotes are for.
This example works for me:
$ perl -e 'print `/bin/bash -c "echo <(pwd)"`'
/dev/fd/63
To deal with running bash and nested quotes, this article provides the best solution: How can I use bash syntax in Perl's system()?
my @args = ( "bash", "-c", "diff <(ls -l) <(ls -al)" );
system(@args);
I thought perl would honor the $SHELL variable, but then it occurred to me that its behavior might actually depend on your system's exec implementation. In mine, it seems that exec will execute the shell (/bin/sh) with the path of the file as its first argument.
You can always do qw/bash your-command/, no?
Create a perl subroutine:
sub bash { return `cat << 'EOF' | /bin/bash\n$_[0]\nEOF\n`; }
And use it like below:
my $bash_cmd = 'paste filename <(cut -d " " -f 2 filename2 | grep -v mean) >> filename3';
print &bash($bash_cmd);
Or use a Perl here-doc for multi-line commands:
$bash_cmd = <<'EOF';
for (( i = 0; i < 10; i++ )); do
echo "${i}"
done
EOF
print &bash($bash_cmd);
I like to define a function btck (which integrates error checking) and bash_btck (which uses bash):
use Carp;
sub btck ($)
{
    # Like backticks, but the error check and chomp() are integrated
    my $cmd = shift;
    my $result = `$cmd`;
    $? == 0 or confess "backtick command '$cmd' returned non-zero";
    chomp($result);
    return $result;
}
sub bash_btck ($)
{
    # Like backticks, but using bash, with the error check and chomp()
    # integrated
    my $cmd = shift;
    my $sqpc = $cmd;            # Single-Quote-Protected Command
    $sqpc =~ s/'/'"'"'/g;
    my $bc = "bash -c '$sqpc'";
    return btck($bc);
}
One of the reasons I like to use bash is for safe pipe behavior:
sub safe_btck ($)
{
    return bash_btck('set -o pipefail && ' . shift);
}
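For example, in a plain pipeline the exit status is that of the last command only, so an earlier failure is masked; with pipefail it is reported (hypothetical file name):
# dies if /some/file is missing, even though tr itself exits 0
my $upper = safe_btck(q{cat /some/file | tr a-z A-Z});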