I have a script which executes a few commands and then telnets to a machine. Now I need to call this script from another Perl script.
$result = `some_script.pl`;
The script some_script.pl executes successfully, but I am not able to exit from the main script because it sits waiting at the telnet prompt.
I also need to capture the exit status of the script in order to make sure that some_script.pl executed successfully.
I cannot modify some_script.pl.
Is there some way I can issue quit after some_script.pl has executed successfully?
Try this out: this 'magic' closes standard in/out/err and may let your program finish.
$result = `some_script.pl >&- 2>&- <&-`;
Otherwise you could use IPC::Open2 and Expect to watch for a specific string (like Done!) in your program's output and close it when done.
http://search.cpan.org/~rgiersig/Expect-1.15/Expect.pod
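For instance, a minimal, untested sketch of the Expect approach; the Done! marker is an assumption, so substitute whatever some_script.pl actually prints when its work is finished:

use Expect;

my $exp = Expect->spawn('some_script.pl')
    or die "Cannot spawn some_script.pl: $!";

# wait up to 60 seconds for the (assumed) completion marker,
# then answer the telnet prompt with quit
$exp->expect(60,
    [ qr/Done!/ => sub { my $self = shift; $self->send("quit\n"); } ]);
$exp->soft_close();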
I don't like the way you are executing your Perl script with a backtick call to the shell.
I suggest you actually fork (or something equivalent) and run the program in a more controlled manner.
my $pid = fork();
die "Failed to fork: $!" unless defined $pid;

if ($pid) {    # in the parent, $pid holds the child's process ID
    waitpid($pid, 0);    # wait for the child to finish
} else {       # this is the child, where we want to run the telnet
    exec 'some_script.pl' or die "exec failed: $!";    # the child now "becomes" some_script.pl
}
Since I don't know how some_script.pl actually works, I cannot really help you more here. But, for example, if all you need to do is print "quit" to some_script.pl's standard input, you could use IPC::Open2 as suggested in another question, doing something like:
use IPC::Open2;

my $pid = open2(\*CHLD_OUT, \*CHLD_IN, 'some_script.pl');
print CHLD_IN "quit\n";
waitpid( $pid, 0 );
my $child_exit_status = $? >> 8;
You do need to tweak this a little, but the idea should solve your problem.
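For instance (a sketch, untested since I don't know some_script.pl's behavior), closing the child's input after sending quit and draining its output before reaping avoids the usual open2 deadlocks:

use IPC::Open2;

my $pid = open2(my $chld_out, my $chld_in, 'some_script.pl');
print $chld_in "quit\n";
close $chld_in;               # send EOF so the child stops waiting for input

my @output = <$chld_out>;     # drain output so the child cannot block on a full pipe
waitpid( $pid, 0 );
my $child_exit_status = $? >> 8;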
I am working on a capstone project and am hoping for some insight.
This is the first time I've worked with Perl, and it's pretty much a basic Perl script to automate a few different Unix commands that need to be executed in a specific order. There are two lines in the script which execute a Unix command that needs to finish processing before the rest of the script may run (the data will be incorrect otherwise).
How can I use Perl (or maybe this is a Unix question?) to print a simple string once the Unix command has finished processing? I am looking into ways to read in the Unix command's name, but I am not sure how to check whether the process is still running, so that I can print a string such as "X command has finished processing" upon its completion.
Example:
system("nohup scripts_pl/RunAll.pl &");
This runs a command in the background that takes time to process. I am asking how I can use Perl (or Unix?) to print a string once the process has finished.
I'm sorry if I've misunderstood your question, but couldn't you use Perl's fork function instead of & if you want to run processes in parallel?
use feature 'say';

# parent process
if (my $pid = fork) {
    # this block behaves as a normal (foreground) process
    system("nohup scripts_pl/RunAll2.pl");    # you can run other commands here (like RunAll2.pl)
    wait;                                     # wait for the background process to finish
    say 'finished both';
}
# child process
else {
    # this block behaves as the background process
    system("nohup scripts_pl/RunAll.pl");     # note: no trailing &
}
You could try to use IPC::Open3 instead of system:
use IPC::Open3;
my $pid = open3("<&STDIN", ">&STDOUT", ">&STDERR", 'nohup scripts_pl/RunAll.pl');
waitpid( $pid, 0 );
Or, if you need to run nohup through the shell:
my $pid = open3("<&STDIN", ">&STDOUT", ">&STDERR", 'bash','-c', 'nohup scripts_pl/RunAll.pl & wait');
Update: Thanks to @ikegami. A better approach if you would like STDIN to stay open after running the command:
open(local *CHILD_STDIN, '<', '/dev/null') or die $!;
my $pid = open3("<&CHILD_STDIN", ">&STDOUT", ">&STDERR", 'nohup scripts_pl/RunAll.pl');
I'm trying to grasp the concept of fork() & exec() for my own learning purposes. I'm trying to use Perl's fork to create a second identical process, and then use that to exec a .sh script.
If I use fork() & exec(), can I get the .sh script to run in parallel with my Perl script? The Perl script shouldn't wait on the child process and should continue its execution. My Perl script doesn't care about the output of the child process, only that the command is valid and running. Sort of like calling the script to run in the background.
Is there some sort of safety I can implement to know that the child process exited correctly as well?
If I use fork() & exec(), can I get the .sh script to run in parallel with my Perl script? [...] Sort of like calling the script to run in the background.
Yes. Fork & exec is actually the way shells run commands in the background.
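For example, a minimal sketch (the shell script name is a placeholder):

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # child: replace this process with the shell script
    exec '/bin/sh', 'my_script.sh' or die "exec failed: $!";
}

# parent: continues immediately; the script runs in parallel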
Is there some sort of safety I can implement to know that the child process exited correctly as well?
Yes, by using waitpid() and looking at the return value stored in $?.
Like @rohanpm mentioned, the perlipc man page has a lot of useful examples showing how to do this. Here is one of the most relevant, where a signal handler is set up for SIGCHLD (the signal sent to the parent when a child terminates):
use POSIX ":sys_wait_h";

our %Kid_Status;    # exit statuses, keyed by child PID

$SIG{CHLD} = sub {
    # reap every child that has exited, without blocking
    while ((my $child = waitpid(-1, WNOHANG)) > 0) {
        $Kid_Status{$child} = $?;
    }
};
To get waitpid to not wait for the child:
use POSIX qw/ WNOHANG /;
my $st = waitpid $pid, WNOHANG;
$st is 0 if the process is still running and the pid if it's reaped.
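For example, a small polling sketch built on that (assuming $pid holds the child's PID):

use POSIX qw/ WNOHANG /;

while (1) {
    my $st = waitpid($pid, WNOHANG);
    if ($st == $pid) {    # the child has exited and been reaped
        printf "child exited with status %d\n", $? >> 8;
        last;
    }
    sleep 1;    # child still running: do other work, then poll again
}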
I need to run an external tool from within my Perl code. The command runs for a pretty long time, prints almost nothing to STDOUT, but creates a log file.
I would like to run it and, in parallel, read and process its log file. How can I do that in Perl?
Thanks in advance.
If you use something like File::Tail to read the log file, then you can do a simple fork and exec to run the external command. Something like the following should work:
use strict;
use warnings;
use File::Tail;
my $pid = fork;
die "fork failed: $!" unless defined $pid;

if ( $pid ) {
    # in the parent process; open the log file and wait for input
    my $tail = File::Tail->new( '/path/to/logfile.log' );
    while ( my $line = $tail->read ) {
        # do stuff with $line here
        last if $line =~ /^done running/; # we need something to escape the loop
                                          # or it will wait forever for input
    }
} else {
    # in the child process, run the external command
    exec 'some_command', 'arg1', 'arg2';
    die "exec failed: $!";    # only reached if exec itself fails
}

# wait for the child process to exit and clean it up
my $exit_pid = wait;
If there are problems running the child process, the exit return code will be in the special variable $?; see the documentation for wait for more information.
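For example, the standard decoding of $? (as shown in the documentation for system and wait) looks like this:

if ( $? == -1 ) {
    warn "failed to execute child: $!";
} elsif ( $? & 127 ) {
    warn sprintf "child died with signal %d", $? & 127;
} else {
    printf "child exited with status %d\n", $? >> 8;
}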
Also, if the logging output does not provide a clue for when to stop tailing the file, you can install a handler in $SIG{CHLD} which will catch the child process's termination signal and allow you to break out of the loop.
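Here is a hedged sketch of that signal-handler variant; it assumes File::Tail's nowait option, which makes read return immediately (with an empty string) when no new line is available, so the flag set by the handler actually gets checked:

use File::Tail;

my $child_done = 0;
$SIG{CHLD} = sub { $child_done = 1 };

my $tail = File::Tail->new( name => '/path/to/logfile.log', nowait => 1 );
until ($child_done) {
    my $line = $tail->read;
    unless (length $line) {
        sleep 1;    # nothing new yet; avoid a busy loop
        next;
    }
    # do stuff with $line here
}
wait;    # reap the child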
I am using Perl to execute psexec and capture the output from the console. What seems odd to me is that when I execute the command with backticks, it correctly captures output every time.
For example, this Perl script works, and I've used this for many years on many different configurations:
use strict;
my @out;
@out = `psexec \\\\192.168.1.105 -u admin -p pass netstat -a`;
print @out;
This Perl script fails, and seems to reliably cause psexesvc to hang on the remote system:
use IPC::Open2;
my($chld_out, $chld_in, $pid);
$pid = open2($chld_out, $chld_in, 'psexec \\\\192.168.1.105 -u admin -p pass netstat -a');
waitpid( $pid, 0 );
my $child_exit_status = $? >> 8;
my $answer = <$chld_out>;
print "\n\n answer: $answer";
What is so strange to me is that backticks never seem to have any problem. Everything else does, including examples in C++ from MSDN.
My suspicion is that the problem with IPC::Open2 and the example in C++ (linked above) is related to the fact that I'm redirecting STDIN and STDOUT from the command shell (cmd.exe), and the child process (psexec) does the same thing when communicating with my remote system.
Also, where in the perldocs can I find detailed information on how backticks work? I'm most interested in their "internals" on Windows.
Or, where in the Perl source can I review the inner workings of backticks (that may be biting off more than I can chew, but it's worth a shot at this point).
UPDATE:
Following Andy's suggestion, I found this works:
use IPC::Open2;
my($chld_out, $chld_in, $pid);
$pid = open2($chld_out, $chld_in, 'psexec \\\\192.168.1.105 -u admin -p pass netstat -a');
my #answer = <$chld_out>;
print "\n\n answer: #answer";
waitpid( $pid, 0 );
my $child_exit_status = $? >> 8;
I know very little about how this works on Windows, so maybe somebody can provide a more specific answer, but when piping between processes in Perl, you need to be careful to avoid undesired blocking and deadlocks. There is some discussion of various problem scenarios in perlipc.
In your example, the immediate call to waitpid causes problems. One possibility is that the child cannot exit until something reads the output, so everything hangs since the parent is not going to read the output. Another possibility is that part of the data stream is shut down as part of the waitpid call, and this causes a problem with the remote process.
In any case, it would be better to read the output from the child process before calling waitpid.
I have a Perl script, script.pl, which, when run, does a fork; the parent process outputs its PID to a file and then exits, while the child process outputs something to STDOUT and then goes into a while loop.
my $pid = fork();
if ( !defined $pid )
{
    die "Failed to fork.";
}
# parent process
elsif ($pid)
{
    if ( !open(PID, ">>running_PIDs") )
    {
        warn "Error opening file to append PID";
    }
    print PID "$pid \n";
    close PID;
}
# child process
else
{
    print "Output started";
    while ($loopControl)
    {
        # Do some stuff
    }
}
This works fine when I call it locally, i.e. perl script.pl.
The script prints out some things and then returns control to the shell (while the child process goes off into its loop in the background).
However, when I call this via ssh, control is never returned to the shell (nor is the "Output started" line ever printed), i.e.:
$ ssh username@example.com 'perl script.pl'
However, the interesting thing is, the child process does run (I can see it when I type ps).
Can anyone explain what's going on?
EDIT:
I ran it under the debugger and got this:
### Forked, but do not know how to create a new TTY.
Since two debuggers fight for the same TTY, input is severely entangled.
I know how to switch the output to a different window in xterms
and OS/2 consoles only. For a manual switch, put the name of the created TTY
in $DB::fork_TTY, or define a function DB::get_fork_TTY() returning this.
On UNIX-like systems one can get the name of a TTY for the given window
by typing tty, and disconnect the shell from TTY by sleep 1000000.
Whenever you launch background jobs via non-interactive ssh commands, you need to close or otherwise tie off stdin, stdout, and stderr; otherwise ssh will wait for the backgrounded process to exit (see the ssh FAQ).
This is called disassociating or detaching from the controlling terminal and is a general best practice when writing background jobs, not just for SSH.
So the simplest change that doesn't mute your entire command is to add:
#close std fds inherited from parent
close STDIN;
close STDOUT;
close STDERR;
right after your print "Output started";. If your child process needs to print output periodically during its run, then you'll need to redirect to a log file instead.
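For instance, instead of the three close calls, here is a sketch that reopens the standard handles (the log path is a placeholder):

# detach from the inherited handles, but keep somewhere to log to
open STDIN,  '<',  '/dev/null'       or die "can't reopen STDIN: $!";
open STDOUT, '>>', '/tmp/script.log' or die "can't reopen STDOUT: $!";
open STDERR, '>&', \*STDOUT          or die "can't reopen STDERR: $!";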
ssh username@example.com 'nohup perl script.pl'
You aren't able to exit because there's still a process attached. You need to nohup it.
What is happening is that ssh is executing 'perl script.pl' as a command directly. If you have 'screen' available, you could do:
$ ssh username@example.com 'screen -d -m perl script.pl'
to have it running on a detached screen, and reattach later with screen -r
To understand this better, I would recommend reading @Jax's solution on
Getting ssh to execute a command in the background on target machine
It's not to do with Perl. It's because of the way SSH handles any long-running process you're trying to background.
I need to launch script.pl from a bash script (to define essential local variables on the target host):
$ ssh username@example.com /path/to/launcher.sh
/path/to/launcher.sh was invoking the Perl script with:
CMD="/path/to/script.pl -some_arg=$VALUE -other_arg"
$CMD &
which worked locally, but when run via ssh it didn't return.
I tried @pra's solution inside the Perl script, but it didn't work in my case.
Using @Jax's solution, I replaced $CMD & with this:
nohup $CMD > /path/to/output.log 2>&1 < /dev/null &
and it works beautifully.