-t in the body of a perl script

I have a script with a line like this:
$foo = $bar if -t;
Near as I can tell, this is saying,
if this script is run from a terminal, set $foo to $bar.
If this script was run from cron, that would evaluate to false.
Have I got this right?

In perldoc perlfunc for the collection of functions called -X, you can read:
-t Filehandle is opened to a tty.
Also
If the argument is omitted, tests $_, except for -t, which tests STDIN.
Which is to say your code does -t STDIN.
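A minimal sketch showing the equivalence, reusing the question's own variables:
#!/usr/bin/perl
# a bare -t tests STDIN, so these two lines do the same thing
$foo = $bar if -t;
$foo = $bar if -t STDIN;
# under cron STDIN is not a tty, so -t is false and the assignment is skipped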

The -t file test is documented in perlfunc, although you get to it by looking up -X instead of the specific file test:
% perldoc -f -X
Depending on your task, IO::Interactive may do the job better since there can be a few gotchas with figuring out if something is truly interactive.
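A small sketch of that approach, assuming IO::Interactive is installed from CPAN:
#!/usr/bin/perl
use strict;
use warnings;
use IO::Interactive qw(is_interactive);

# is_interactive() tries harder than a bare -t to decide whether
# the program is really talking to a user
print "running interactively\n" if is_interactive();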
If you want to know that you are running under cron (and not non-interactive in some other way), you might consider having a variable set in your crontab (or using one already set) and simply looking for it. In your crontab:
IN_CRON=1
Then, in the script:
do_something() if $ENV{IN_CRON};

Related

Run a sed search and replace inside perl

I am trying to test the code snippet below for a bigger script that I am writing. However, I can't get the search working with parentheses and variables.
Appreciate any help someone can give me.
Code snippet:
#!/usr/bin/perl
$file="test4.html";
$Search="Help (Test)";
$Replace="Testing";
print "/usr/bin/sed -i cb 's/$Search/$Replace/g' $file\n";
`/usr/bin/sed -i cb 's/$Search/$Replace/g' $file`;
Thanks,
Ash
The syntax to run a command in a child process and wait for its termination in perl is system "cmd", "arg1", "arg2",...:
#!/usr/bin/perl
$file="test4.html";
$Search="Help (Test)";
$Replace="Testing";
print "/usr/bin/sed -icb -e 's/$Search/$Replace/g' -- $file\n";
system "/usr/bin/sed", "-icb", "-e", "s/$Search/$Replace/g", "--", $file;
(error checking left as an exercise, see perldoc -f system for details)
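For reference, the usual idiom from perldoc -f system is to check the return value, something like:
system("/usr/bin/sed", "-icb", "-e", "s/$Search/$Replace/g", "--", $file) == 0
    or die "sed failed: $?";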
Note that -i is not a standard sed option. The few implementations that support it (yours must be the FreeBSD one as you've separated the cb backup extension from -i) have actually copied it from perl! It does feel a bit silly to be calling sed from perl here.
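If you do want to drop sed, here is a sketch of a pure-Perl equivalent, using the same $^I in-place mechanism that sed's -i imitates; \Q...\E neutralises the parentheses in the search string:
#!/usr/bin/perl
$^I = 'cb';                       # in-place edit with backup suffix, like sed -icb
@ARGV = ('test4.html');
my $Search  = 'Help (Test)';
my $Replace = 'Testing';
while (<>) {
    s/\Q$Search\E/$Replace/g;     # \Q...\E quotes regex metacharacters
    print;
}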
Looking at your approach:
The `...` operator itself is reminiscent of the equivalent `...` shell operator. In perl, what's inside is evaluated as if inside double quotes, in that $var, @var... perl variables are expanded, and a shell is started with -c and the resulting string as arguments and with its stdout redirected to a pipe.
The shell interprets that argument as code in the shell syntax. Perl reads the output of that inline shell script from the other end of the pipe and that makes up the expansion of `...`. Same as in shell command substitution except that there is no stripping of zero bytes or of trailing newlines.
sed -i produces no output, so it's pointless to try and capture its output with `...` here.
Now in your case, the code that sh is asked to interpret is:
/usr/bin/sed -i cb 's/Help (Test)/Testing/g' test4.html
That should work fine on FreeBSD or macOS at least. If $file had been test$(reboot).html, that would have been worse though.
Here, because you have the contents of variables that end up interpreted as code in an interpreter (here sh), you have a potential arbitrary command injection vulnerability.
In the system approach, we remove sh, so that particular vulnerability is removed. However sed is also an interpreter of some language. That language is not as omnipotent as that of sh, but for instance sed can write to arbitrary files with its w command. The GNU implementation (which you don't seem to be using) can run arbitrary commands as well.
So you still potentially have a code injection vulnerability in the case of $Search or $Replace coming from an external source.
If that's the case, you'd need to make sure you properly sanitise those values before running sed. See for instance: How to ensure that string interpolated into `sed` substitution escapes all metachars

parsing first entry of a find call in perl?

I need to get an example file from a find command in a Perl script to create another system call afterwards. For some reason, the find command gets stuck when I call it from the script. Here is what I need to do:
my $search_dir = "/something/like/this/??/??/??";
# the triple '??' are needed here
my $cmd = "find $search_dir -name \"\*.$var1.token1.$var2.ext\" | head -n 1";
my $first_example_file = `$cmd`; chomp $first_example_file;
This gets stuck when I run it through Perl; it never finishes executing the command, whereas the constructed $cmd runs in no time if I copy+paste it and run it in my bash terminal. Any ideas?
Try using the File::Find perl module for finding files. If you would like to use bash's find in your perl then you might have to use $(..) in your command.
I am not into perl … just trying to help out.
Update:
As stated in the comments by Rohaq you can also use File::Find::Rule
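A hedged sketch of that route, with placeholder values standing in for the question's $var1 and $var2 (glob expands the ?? wildcards, so no shell is involved):
#!/usr/bin/perl
use strict;
use warnings;
use File::Find::Rule;

my ($var1, $var2) = ('foo', 'bar');   # placeholders
my @dirs  = glob '/something/like/this/??/??/??';
my @files = File::Find::Rule->file
                            ->name("*.$var1.token1.$var2.ext")
                            ->in(@dirs);
my $first_example_file = $files[0];   # the "| head -n 1" part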
I'd wager globbing (shell metacharacter expansion) is involved. But regardless, try and chop the command up. Does it work without the pipe? What about without the ?? in the pathname? What happens if you prepend 'echo' ("echo find ...")? Still hanging? Then you can try it under perl -d (the debugger); perldoc perldebug is your friend.

Eval for multiple command execution in ksh93, Solaris

I would like to execute two or more commands back to back. But these commands are stored in a variable in my script. For example,
var="/usr/bin/ls ; pwd ; pooladm -d; pooladm -e"
The problem arises when I execute this variable via my script.
Suppose I go:
#!/bin/ksh -p
..
..
var="/usr/bin/ls ; pwd;pooladm -d; pooladm -e"
..
..
$var # DOES NOT WORK ..BUT WORKS WITH EVAL
It doesn't work.
But the moment I use eval :
eval $var
It works brilliantly.
I was just wondering if there is any other way to execute a bunch of commands stored in a variable without using eval.
Also, is eval usage considered a bad programming practice? My coding standards appear to shun its usage rather than embrace it. Please do let me know.
Remember that the shell only parses the line once. So when you expand your $var, it becomes one string containing blanks. Since you have no executable named '/usr/bin/ls ; pwd;pooladm -d; pooladm -e', it can't run it.
On the other hand, eval takes its arguments and re-scans them; now you get '/usr/bin/ls', 'pwd', and so on. It works.
eval is a little chancy because it leaves a possible security hole -- consider if someone managed to get 'rm -rf /' into the string. But it's a useful tool.
Use backticks and echo. In your case
`echo $var`
You could invoke another copy of the shell to run the command:
sh -c "$var"
This isn't necessarily better than using eval. The main practical difference is that eval will run the commands in the context of the current shell, while "sh -c" runs the commands in a separate shell instance. If var contains commands to set environment variables or change the current directory, you may or may not want those commands to affect the current shell.

jsvc (tomcat) does not daemonize properly when run with backticks and then defuncts

In debian lenny, when running /etc/init.d/tomcat5.5 start, it runs jsvc and expects it to daemonize itself.
From a simple bash shell, this works fine.
However, from a script, it can get completely stuck.
For example, the following works like a charm:
#!/usr/bin/perl
my $cmd = '/etc/init.d/tomcat5.5 start';
system($cmd);
However, the following gets stuck as jsvc does not daemonize:
#!/usr/bin/perl
my $cmd = '/etc/init.d/tomcat5.5 start';
`$cmd`;
It also gets stuck when running it using backticks in bash:
#!/bin/bash
CMD='/etc/init.d/tomcat5.5 start'
`$CMD`
Is this a bug in jsvc? Any idea why this works in a shell or using system(), but not using backticks? I am actually getting defunct/zombie processes because of this issue.
Just a hunch -- for a job to become a daemon it needs to close any file descriptors that were opened in its parent process. Perhaps this is easier to do with system than with backticks/readpipe: backticks give the child a pipe on its stdout and perl reads it until end-of-file, so if the daemonized jsvc keeps that descriptor open, the script never sees EOF and blocks. What if you used the backticks like:
`$CMD < /dev/null > /dev/null 2>&1`
Backticks will evaluate to the output of the command; if there's lots of data, you may fill the buffer. No need to use the backticks if you don't want to evaluate or capture the output in the script itself.
For example, this bash script should work:
#!/bin/bash
CMD="/etc/init.d/tomcat5.5 start"
# note no backticks
$CMD
Also, please define "daemonize": do you want this nohup'd and asynchronous?

Is it possible to make 'exec' use '$SHELL -c' instead of '/bin/sh -c' in Perl?

In Perl, is it possible to make 'exec', 'system', and 'qx' use a shell other than /bin/sh (without using a construct like 'exec "$SHELL -c ..."', and without recompiling perl)?
EDIT: The motivation for this question is a bash script that does 'export -f foo' and then uses perl in a subshell to invoke the function directly via 'system "foo"'. I am not sure that this technique will work with all sh, and although 'system "/bin/bash -c foo"' may work in that scenario, I wouldn't expect the exported function to propagate through all variants of /bin/sh. But mostly I was just curious, and am now curious about how to extend the solution to qx. Also, since I know nothing about non-unix platforms, I'd like to avoid hard coding the path to an alternate shell in the solution.
You can override exec and system. See perldoc perlsub for the details, but here is roughly what you want:
#!/usr/bin/perl
use strict;
use warnings;
use subs qw/system/;
sub system {
    # one-argument version: hand the string to $ENV{SHELL} instead of /bin/sh
    if (@_ == 1) {
        return CORE::system $ENV{SHELL}, '-c', $_[0];
    }
    # multi-argument version: no shell involved, pass straight through
    return CORE::system @_;
}
print "normal system:\n";
system "perl", "-e", q{system q/ps -ef | grep $$/};
print "overloaded system:\n";
system 'ps -ef | grep $$';
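To extend the same trick to qx// (the follow-up in the question's edit), one possibility is overriding CORE::GLOBAL::readpipe, which backticks and qx call into on modern perls. A hedged sketch; older perls had bugs around overriding readpipe:
BEGIN {
    *CORE::GLOBAL::readpipe = sub {
        my ($cmd) = @_;
        open my $fh, '-|', $ENV{SHELL}, '-c', $cmd
            or die "cannot run $ENV{SHELL}: $!";
        local $/;                 # slurp: one string, even in list context
        return scalar <$fh>;
    };
}
print qx'echo $0';                # single quotes so the shell, not perl, expands $0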
exec and system will use the shell (which will likely not be /bin/sh on non-UNIX systems) if you pass them a single argument containing shell metacharacters. (Details are described in perlfunc.)
You may want to have a look at IPC::Run3 as an alternative to system
Why don't you want to use 'exec "$SHELL -c ..."'? If you don't want to see that code every time you call exec or system, just hide it in a subroutine. That's what they're there for. :)
sub my_exec {
    exec $ENV{SHELL}, '-c', @_;
}
If you want to do that, however, I suggest somehow sanitizing $ENV{SHELL} so that people don't do odd things to your script by setting weird values. You might want to ensure that the shell is listed in /etc/shells or whatever way your system lists approved login shells. You also need to do a bit more work to make this taint-clean, which you should probably do if you are going to send data to another process.
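A sketch of that /etc/shells check (shell_ok is a hypothetical helper name):
sub shell_ok {
    my ($shell) = @_;
    open my $fh, '<', '/etc/shells' or return 0;
    while (my $line = <$fh>) {
        chomp $line;
        return 1 if $line eq $shell;
    }
    return 0;
}
die "untrusted \$SHELL\n" unless $ENV{SHELL} && shell_ok($ENV{SHELL});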
exec doesn't use /bin/sh when you give it a list: it just execs the program you specify, no shell involved.
If you want it to go through a shell you have to do that yourself.