Running a program that requires a password on the command line from a Perl script - perl

I have written a Perl wrapper around a shell script. I am using IPC::Run::Simple to execute system commands. As an example:
run ("mkdir ~$usr/12.2.0_cp/faiz_cpv/$pdate") or die "Error $ERR";
run ("cp ~$usr/12.2.0_cp/faiz_cpv/MPlist.lst ~$usr/12.2.0_cp/faiz_cpv/$pdate") || die "Error: $ERR";
run ("cd ~$usr/12.2.0_cp/faiz_cpv/$pdate; sh /opsutils/mfg_top/rel/CPV/bin/list_generation.sh . MPlist.lst mfg_relall_us\#oracle.com") or die "error $ERR";
...
One of these shell scripts requires the user to enter a password: a prompt is printed on stdout and the password is read via the shell. This shell script is called a number of times during the entire process, which means the user must re-enter the password a number of times.
Is there a way I can ask the user for the password once, at the command line, and pass that password along implicitly instead of prompting the user again and again?

Perl has mkdir and chdir built in, and File::Copy provides a copy routine. It's generally safer and faster to use them than shelling out, though they will not expand ~ for you. File::chdir makes changing a directory and running a command a little safer.
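For example, here is a minimal sketch of the first two commands using built-ins (assuming $usr and $pdate are set as in the question; getpwnam is used to expand the home directory, since Perl does not interpret ~):
use strict;
use warnings;
use File::Copy qw(copy);

# ~ is a shell feature; look up the user's home directory instead.
my $home = ( getpwnam($usr) )[7]
    or die "No home directory found for user $usr";
my $dir = "$home/12.2.0_cp/faiz_cpv/$pdate";

mkdir $dir or die "mkdir $dir failed: $!";
copy( "$home/12.2.0_cp/faiz_cpv/MPlist.lst", $dir )
    or die "copy failed: $!";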
For the rest, use the full IPC::Run to control interacting with your program and Term::ReadLine::Gnu to read the password without displaying it. Sorry this is just a sketch and not a full answer.
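As a rough illustration of that sketch (untested; it uses Term::ReadKey rather than Term::ReadLine::Gnu for the no-echo prompt, and it only helps if list_generation.sh reads the password from stdin rather than opening /dev/tty directly):
use strict;
use warnings;
use IPC::Run qw(run);
use Term::ReadKey;

# Ask for the password once, without echoing it.
print "Password: ";
ReadMode('noecho');
chomp( my $password = ReadLine(0) );
ReadMode('restore');
print "\n";

# Feed the password on stdin each time the shell script asks for it.
my ( $out, $err );
run [ 'sh', '/opsutils/mfg_top/rel/CPV/bin/list_generation.sh',
      '.', 'MPlist.lst', 'mfg_relall_us@oracle.com' ],
    \"$password\n", \$out, \$err
    or die "list_generation.sh failed: $err";
print $out;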

Related

How to pass password via Perl script

I have a situation wherein I am calling the Perl script below:
if (param("lremail") && param("lot")) {
my $address=param("lremailaddress");
my $lot=param("lot");
print a({-href=>"$dir/new.pl"},"Back to Top"),br;
print "Request submitted for $address.",br;
print "Lot $lot",br;
print "You will receive an e-mail with a link to the data when the request is complete.";
print end_html;
system ("ssh SERVERNAME /test/abc.csh $lot $$ $address &");
exit(1);
The above script does not run because the system call prompts for a password when it executes. I looked around and found the command below:
expect -c 'spawn ssh SERVERNAME /test/abc.csh J213520 06 abc@gmail.com "ls -lh file"; expect "Password:"; send "PASSWORD\r"; interact'
The above command executes successfully without any issue, but only from the command line. When I incorporate the same command (replacing the system call) within the Perl script, it fails. How can I incorporate it within the first script?
Reiterating and adding to comments:
Consider using key-based authentication, either with passphrase-less keys or with ssh-agent (e.g., using ssh-keygen generated/managed identities);
Consider using sshpass or another expect-like external tool;
Consider using the Perl Expect module or an equivalent CPAN module; and/or,
Consider using the Perl Net::SSH module or an equivalent CPAN module.
Also, system can easily introduce remote code execution vulnerabilities, especially when you pass it a single string rather than using its LIST syntax (the LIST form bypasses the shell). A sketch combining the first and last suggestions appears below.
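Here is a minimal sketch using Net::OpenSSH, one such CPAN module (it assumes a key is already installed for SERVERNAME, with $lot and $address taken from the question's script):
use strict;
use warnings;
use Net::OpenSSH;

# Key-based auth: no password prompt if an identity is set up.
my $ssh = Net::OpenSSH->new('SERVERNAME');
$ssh->error and die "SSH connection failed: " . $ssh->error;

# LIST-style arguments: the remote shell never re-parses
# user-supplied values such as $lot or $address.
$ssh->system( '/test/abc.csh', $lot, $$, $address )
    or die "remote command failed: " . $ssh->error;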

Using expect from Perl

(This question is the same as "How to pass password via Perl script" above: an expect one-liner that answers ssh's password prompt works from the command line, but fails when it replaces the system call inside the Perl CGI script. How can it be incorporated into the first script?)
There's an Expect module for Perl.
However, I tend to write straight expect scripts and call them from Perl. That way I can use the expect scripts on their own. But then, I used to do a lot of Tcl too.
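A minimal sketch of the Expect module in use (untested; $lot, $address, and $password are assumed to come from the surrounding script):
use strict;
use warnings;
use Expect;

# Spawn ssh and answer its password prompt programmatically.
my $exp = Expect->spawn( 'ssh', 'SERVERNAME', '/test/abc.csh', $lot, $$, $address )
    or die "Cannot spawn ssh: $!";

$exp->expect(
    30,    # timeout in seconds
    [ qr/[Pp]assword:/ => sub {
        my $self = shift;
        $self->send("$password\r");
        exp_continue;
    } ],
);
$exp->soft_close();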

How to set a crontab job for a perl script

I have a Perl script which I want to run every 4 hours through cron. But somehow it fails to execute through cron, while it runs fine from the command line. Following is the command I set in crontab:
perl -q /path_to_script/script.pl > /dev/null
Also, when I run this command at the command prompt it does not execute, but when I cd into the leaf folder of path_to_script and execute the file directly, it runs fine.
Also, where will the log files of this cron job be created so that I can view them?
You should probably change the working directory to "leaf folder".
Try this in your crontab command:
cd /path_to_script; perl script.pl >/dev/null
Wrt. log files: cron will mail you the output. But since you sent stdout to /dev/null, only stderr will be mailed to you.
If you want the output saved in a log file, redirect both stdout and stderr of the script into a file, like so:
cd /path_to_script; perl script.pl >my_log_file 2>&1
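Putting the pieces together, an every-4-hours crontab entry along these lines (paths are placeholders) would read:
# min hour day month weekday command
0 */4 * * * cd /path_to_script && perl script.pl >>/path_to_script/script.log 2>&1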
Usually cron will send you mail with the output of your program. When you're figuring it out, you probably want to check the environment. It won't necessarily be the same environment as your login shell (since it's not a login shell):
foreach my $key ( keys %ENV ) {
    print "$key: $ENV{$key}\n";
}
If you're missing something you need, set it in your crontab:
SOME_VAR=some_value
HOME=/Users/Buster
If you need to start in a particular directory, you should chdir there. The starting directory from a cron job probably isn't what you think it is. Without an argument, chdir changes to your home directory. However, sometimes those environment variables might not be set in your cron session, so it's probably better to have a default value:
chdir( $ENV{HOME} || '/Users/Buster' );
At various critical points, you should give error output. This is a good thing even in non-cron programs:
open my $fh, '<', $some_file or die "Didn't find the file I was expecting: $!";
If you redirect things to /dev/null, you lose all that information that might help you solve the problem.
It looks like you may have missed the
#!/usr/bin/perl
line at the start of your Perl script, which is why you might need perl -q to run it.
Once you have added that line (and made the script executable with chmod +x), you can run it directly from the command line using
/path_to_script/script.pl
If you use a command in your Perl program, I advise you to put the full path to the command in your program.
I tried loading the environment, but it did not help.
After going over this with a colleague, I think it comes from the interaction between Perl and the system environment.
Best regards,
Moustapha Kourouma

perl fork doesn't work properly when run remotely (via ssh)

I have a perl script, script.pl which, when run, forks: the parent process writes its pid to a file and then exits, while the child process prints something to STDOUT and then goes into a while loop.
$pid = fork();
if ( !defined $pid ) {
    die "Failed to fork.";
}
# Parent process
elsif ($pid) {
    if ( !open( PID, ">>running_PIDs" ) ) {
        warn "Error opening file to append PID";
    }
    print PID "$pid \n";
    close PID;
}
# Child process
else {
    print "Output started";
    while ($loopControl) {
        # Do some stuff
    }
}
This works fine when I call it locally, i.e. perl script.pl.
The script prints out some things, then returns control to the shell (while the child process goes off into its loop in the background).
However, when I call this via ssh, control is never returned to the shell (nor is the "Output started" line ever printed):
$ ssh username@example.com 'perl script.pl'
However, the interesting thing is, the child process does run (I can see it when I type ps).
Can anyone explain what's going on?
EDIT:
I ran it under debug and got this:
### Forked, but do not know how to create a new TTY.
Since two debuggers fight for the same TTY, input is severely entangled.
I know how to switch the output to a different window in xterms
and OS/2 consoles only. For a manual switch, put the name of the created TTY
in $DB::fork_TTY, or define a function DB::get_fork_TTY() returning this.
On UNIX-like systems one can get the name of a TTY for the given window
by typing tty, and disconnect the shell from TTY by sleep 1000000.
Whenever you launch background jobs via non-interactive ssh commands, you need to close or otherwise tie off stdin, stdout, and stderr. Otherwise ssh will wait for the backgrounded process to exit (this is covered in the ssh FAQ).
This is called disassociating or detaching from the controlling terminal and is a general best practice when writing background jobs, not just for SSH.
So the simplest change that doesn't mute your entire command is to add:
#close std fds inherited from parent
close STDIN;
close STDOUT;
close STDERR;
right after your print "Output started";. If your child process needs to print output periodically during its run, then you'll need to redirect to a log file instead.
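If the child may still need to print later, a common variant (a sketch, not part of the original answer) reopens the standard handles onto /dev/null instead of closing them outright:
# Detach the child from the stdio handles inherited over ssh.
# Reopening onto /dev/null (or a log file) keeps later prints
# from failing, unlike a bare close.
open STDIN,  '<', '/dev/null' or die "Can't reopen STDIN: $!";
open STDOUT, '>', '/dev/null' or die "Can't reopen STDOUT: $!";
open STDERR, '>', '/dev/null' or die "Can't reopen STDERR: $!";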
ssh username@example.com 'nohup perl script.pl'
You aren't able to exit because there's still a process attached. You need to nohup it.
What is happening is that ssh is executing 'perl script.pl' as a command directly. If you have 'screen' available, you could do:
$ ssh username@example.com 'screen -d -m perl script.pl'
to have it running on a detached screen, and reattach later with screen -r
To understand this better, I would recommend reading @Jax's solution on
Getting ssh to execute a command in the background on target machine
It's nothing to do with Perl; it's because of the way SSH handles any long-running process you're trying to background.
I needed to launch script.pl from a bash script (to define essential local variables on the target host):
$ ssh username@example.com /path/to/launcher.sh
/path/to/launcher.sh was invoking the Perl script with:
CMD="/path/to/script.pl -some_arg=$VALUE -other_arg"
$CMD &
which worked locally, but when run via ssh it didn't return.
I tried @pra's solution inside the Perl script, but it didn't work in my case.
Using @Jax's solution, I replaced $CMD & with this:
nohup $CMD > /path/to/output.log 2>&1 < /dev/null &
and it works beautifully.

Missing output when running system command in perl/cgi file

I need to write a CGI program that will display the output of a system command:
script.sh
echo "++++++"
VAR=$(expect -c " spawn ssh -o StrictHostKeyChecking=no $USER#$HOST $CMD match_max
100000 expect \"*?assword:*\" send -- \"$PASS\r\" send -- \"\r\" expect eof ")
echo $VAR
echo "++++++"
In the CGI file:
my $command = "ksh ../cgi-bin/script.sh";
my @output  = `$command`;
print @output;
Finally, when I run the CGI file on Unix, $VAR is a very long string including \n and some delimiters. However, when I run it on the web server, the output is:
++++++
++++++
So $VAR goes missing when the page is served through the web interface/browser.
I suspect the problem is that $VAR is a very long string.
Is there any way to solve this problem other than writing the output to a file and then retrieving it from the browser?
Thanks if you are interested in my question.
script.sh uses several environment variables: $USER, $HOST, $CMD and $PASS. The CGI environment will have different environment variables set than a login shell. You may need to set these variables from your CGI script before calling script.sh.
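For instance, a rough sketch of setting those variables in the CGI script before the call (the values shown are placeholders, not from the original question):
# The CGI environment starts nearly empty; supply whatever
# script.sh expects before invoking it.
$ENV{USER} = 'someuser';                # placeholder
$ENV{HOST} = 'somehost.example.com';    # placeholder
$ENV{CMD}  = 'ls -lh file';             # placeholder
$ENV{PASS} = $password;  # assumed to be obtained elsewhere; don't hard-code it

my $command = "ksh ../cgi-bin/script.sh";
my @output  = `$command`;
print @output;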
Try finding where the commands like expect and ssh that you are calling live on your system, and add their directory paths to the PATH used by your script.
E.g., if
which expect
returns /usr/bin/expect, then add the line:
PATH=$PATH:/usr/bin && export PATH
near the beginning of the ksh script. While debugging, you may also want to redirect stderr to a file by appending 2>/tmp/errors.txt to the end of your command, since stderr is not shown in the browser:
my $command= "ksh ../cgi-bin/script.sh 2>/tmp/errors.txt";