I am running a Perl script over ssh like the one below:
[dipuh@local ~]$ ssh dipuh@myremote_001 'perl /home/dipuh/a.pl'
The content of a.pl is:
print "Sleeping \n";
sleep(60);
print "Waking Up";
Here my local terminal waits for the Perl script to finish completely and only then displays the whole output. The initial "Sleeping" text will also be printed only with the final output.
Is there any way, in my local terminal, to display the output of each command in the Perl script at run time, instead of waiting for the whole script to finish?
You are suffering from buffering.
You may either set $| to 1 for the block:
{
    local $| = 1;

    print "Sleeping \n";
    sleep(60);
    print "Waking Up";
}
Or use IO::Handle
use IO::Handle;
STDOUT->autoflush(1);
You can try turning autoflush mode on. The old-fashioned way to do it is by adding the following at the top of your script:
$| = 1;
or you can do it the more modern way:
use IO::Handle;
STDOUT->autoflush(1);
Alternatively, you can flush STDOUT on demand with:
use IO::Handle;
print "Sleeping \n";
STDOUT->flush;
sleep(60);
print "Waking Up";
STDOUT->flush;
Related
Today I wrote a bash script to open and close a particular Python virtual environment. To close the virtual environment you must run the deactivate command in the same bash process, so you have to use the source command to execute your bash script. (Preface)
Perl question
Here is my Perl script:
#!/usr/bin/perl
BEGIN {
    $\ = "";
}
use warnings;
use strict;
use feature "switch";
use Cwd qw(chdir cwd);
no warnings "experimental::smartmatch";

our $path = $ENV{"PATH"};
if ($path =~ /sspider\/bin/) {
    print "Scrapy virtual environment opened already\n";
    print "Do you want to close it? [y/n]:";
    chomp(my $answer = <STDIN>);
    given ($answer) {
        when (/[yY\n]/) {
            my @path = split(/:/, $path);
            my $scrapy_path;
            for (@path) {
                if (/sspider/) {
                    $scrapy_path = $_;
                }
            }
            print $ENV{PWD}, "\n";
            chdir("$scrapy_path") or die "Can't goto scrapy bin directory";
            print $ENV{PWD}, "\n";
            system("deactivate");
            print "Closed successfully\n";
        }
    }
}
Because the Perl script runs in a new process when I execute it, it can't close the virtual environment.
So I want to know how to solve this. (How do I execute a command in the current bash process from a Perl script?)
It is impossible to run Perl code as part of the current bash process. Bash cannot execute Perl code on its own, so it needs to run the Perl interpreter, which is a separate program and thus runs as a new process.
What might be done, though, is to have the Perl program create a file which then gets sourced by the shell, thus running the shell instructions in that file in the context of the current shell.
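As a rough sketch of that idea (the file name, the path test, and the wrapper invocation below are all assumptions for illustration, not part of the original question): the Perl program only decides what should happen and writes plain shell commands to a file, and the interactive shell sources that file afterwards.
#!/usr/bin/perl
# toggle_env.pl - hypothetical helper: write the commands the *calling* shell
# should run into a file, instead of trying to run them in this child process.
use strict;
use warnings;

my $cmd_file = "/tmp/shell_cmds.$$";            # assumed temporary file name
open my $out, '>', $cmd_file or die "Can't write $cmd_file: $!";

if ( ($ENV{PATH} // '') =~ m{sspider/bin} ) {
    print {$out} "deactivate\n";                # will run in the calling shell
}
else {
    print {$out} "source \$HOME/sspider/bin/activate\n";
}
close $out;

print "$cmd_file\n";                            # tell the caller which file to source
The shell side can then do something like source "$(perl toggle_env.pl)" (typically wrapped in a shell function or alias), so deactivate actually runs in the current interactive shell rather than in a child process.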
Consider this scenario:
We have three scripts:
script.pl
use strict;
use warnings;
print "\nStarting a blocking process";
print "\nRedirect the output of the blocking process to execution.log";
my $cmd = "perl d:\\blocking_script.pl >d:\\execution.log";
my $exitCode = system ($cmd);
print "\nAfter the execution of the blocking process";
print "\nNow I try to rename the log";
rename "d:\\execution.log", "d:\\execution.err" or print "\nCouldn't rename because : $!";
blocking_script.pl
use strict;
use warnings;
print "\nFrom the blocking_process I run a non-blocking process";
my $cmd = "start perl d:\\non_blocking_script.pl";
my $exitCode = system ($cmd);
print "\nAfter I started the non-blocking process";
non_blocking_script.pl
use strict;
use warnings;
print "\nI am an independent non-blocking process";
sleep 5;
print "\nStill here";
sleep 2;
print "\nYou can't rename the log because you didn't wait for me";
sleep 3;
print "\n.";
sleep 1;
What will result from this?
Couldn't rename because : Permission denied
While another command prompt will be hanging, ironically:
I am an independent non-blocking process
Still here
You can't rename the log because you didn't wait for me
.
In my situation, from Perl I run an external application in a blocking way, but that application was starting some non-blocking processes which were holding my log.
How can I overcome this situation?
Here is the documentation for start (which you should also be able to read by running start /? on the command line). I do not have access to a Windows system right now, so I can't verify.
/b
Starts an application without opening a new Command Prompt window. CTRL+C handling is ignored unless the application enables CTRL+C processing. Use CTRL+BREAK to interrupt the application.
blocking_script.pl is waiting for the cmd window which start opened to run non_blocking_script.pl.
In the short run, using start /b might help.
Or, you could try
my @cmd = ( start => qw(perl d:\\non_blocking_script.pl) );
my $exitCode = system @cmd;
However, you should change your design.
I'm trying to test a piece of code ($code) that should make sure that only one instance of the program is running at a time:
#!/usr/bin/perl
# test_lock
use strict;
use warnings;
( my $code = <<'CODE') =~ s/^\s+//gm;
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);
# Make sure only one instance of the program is running at a time.
open our $Lock, '<', $0 or die "Can't lock myself $0: $!";
flock $Lock, LOCK_EX | LOCK_NB
or die "Another instance of $0 is already running. Exiting ...\n";
sleep(2);
CODE
my $progfile = '/tmp/x';
open my $fh, '>', $progfile or die $!;
print $fh $code;
close $fh;
$|++;
my $ex1 = system("perl $progfile &");
print "First system(): $ex1\n";
my $ex2 = system("perl $progfile");
print "Second system(): $ex2\n";
I expected that the second call to system() would return a non-zero value ($ex2) as it can't get the lock and dies. However I get:
$ perl test_lock
First system(): 0
Another instance of /tmp/x is already running. Exiting ...
Second system(): 0
What is wrong with my assumption? (Is there a better way to test the $code?)
I think it's likely because you have a race condition. How do you know that error is actually coming from your second process?
Because if you for example, run:
perl /tmp/x & perl /tmp/x ; echo $?
You may get a zero return, because the 'winner' of the race may well be the latter process (whose return code you're catching). (Try it a few times, and you'll see different results.)
You also have a slight difference in what the shell is doing between the two commands - from the docs:
If there is only one scalar argument, the argument is checked for shell metacharacters, and if there are any, the entire argument is passed to the system's command shell for parsing (this is /bin/sh -c on Unix platforms, but varies on other platforms). If there are no shell metacharacters in the argument, it is split into words and passed directly to execvp, which is more efficient.
So actually you should see an invocation of sh before perl in your first command, which means it's actually more likely to take longer to get to the lock point.
That means your command is more like:
sh -c "perl /tmp/x"& perl /tmp/x; echo $?
Run that a few times and see how many times you get non-zero error codes. It's not often, because usually the 'delay' of the shell start up is enough to ensure that the second instance wins the race most of the time!
If you've got Linux, try strace -fTt yourscript, which will trace the execution flow. Or you can make judicious use of $$ to report the process pid when running.
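For example (an illustrative addition, not from the original answer), you could put something like this at the top of the locked script so each contender identifies itself:
# report this process's pid and start time on STDERR
print STDERR "[$$] trying to take the lock at ", scalar localtime, "\n";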
In both cases, you are obtaining the exit code of the shell you launch. Roughly speaking, the shell returns the exit code of the last program it ran.
Since the shell created by system("perl $progfile &") doesn't wait for the child to end, it will virtually always return 0 since launching perl in the background is unlikely to result in an error.
So if the second instance of perl managed to obtain the lock first, you'll get the outcome you got. This race condition can be seen more clearly if you identify the source of the exception.
#!/usr/bin/perl
# test_lock
use strict;
use warnings;
( my $code = <<'CODE') =~ s/^\s+//gm;
#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(:flock);
# Make sure only one instance of the program is running at a time.
open our $Lock, '<', $0 or die "Can't lock myself $0: $!";
flock $Lock, LOCK_EX | LOCK_NB
or die "$ARGV[0]: Another instance of $0 is already running. Exiting ...\n";
sleep(2);
CODE
my $progfile = 'b.pl';
open my $fh, '>', $progfile or die $!;
print $fh $code;
close $fh;
$|++;
my $ex1 = system("perl $progfile 1 &");
print "First system(): $ex1\n";
my $ex2 = system("perl $progfile 2");
print "Second system(): $ex2\n";
Output:
$ perl a.pl
First system(): 0
1: Another instance of b.pl is already running. Exiting ...
Second system(): 0
$ perl a.pl
First system(): 0
2: Another instance of b.pl is already running. Exiting ...
Second system(): 2816
I am trying to redirect the STDOUT and STDERR into a log file, but I also want to print those streams to the console. I am using Perl, and my code looks like this:
use Capture::Tiny ':all';
my ($stdout, $stderr);
($stdout, $stderr) = capture {
system($command);
};
print $stdout;
print $stderr;
It works, but if the command waits for user input, the program doesn't print $stdout to STDOUT until a key is pressed. Is there any way to print $stdout to STDOUT before it needs user input? A line-by-line approach would be fine.
Thank you in advance!
Well, I'm not familiar with Capture::Tiny so this may not be entirely relevant. Generally, though, if I'm looking to handle STDIN, STDOUT and/or STDERR then I look towards either open (if it's just one), or IPC::Open2 and IPC::Open3, which open multiple file descriptors attached to a process.
use IPC::Open3;
$pid = open3(\*CHLD_IN, \*CHLD_OUT, \*CHLD_ERR,
'some cmd and args', 'optarg', ...);
use IPC::Open2;
$pid = open2(\*CHLD_OUT, \*CHLD_IN, 'some', 'cmd', 'and', 'args');
Although, rather than the examples above, I would suggest using lexical filehandles:
my($chld_out, $chld_in);
$pid = open2($chld_out, $chld_in, 'some cmd and args');
You can then read and write from your filehandles (bear in mind, though, that by default a read will be blocking).
You do need to close them and then (ideally) waitpid to clean up the process when you're done, though, as in the sketch below.
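A minimal sketch of that read/write/cleanup cycle, using bc as the child command purely as an assumed example:
use IPC::Open2;

my ($chld_out, $chld_in);
my $pid = open2($chld_out, $chld_in, 'bc');   # 'bc' is just an assumed example command

print {$chld_in} "2 + 2\n";                   # write to the child's STDIN
my $answer = <$chld_out>;                     # blocking read of one line of its STDOUT
print "child said: $answer";

close $chld_in;                               # let the child see EOF on its STDIN
close $chld_out;
waitpid $pid, 0;                              # reap the child so it doesn't become a zombie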
You need to use Capture::Tiny's tee instead of capture.
The tee function works just like capture, except that output is captured as well as passed on to the original STDOUT and STDERR.
Just replace the function call and your output will end up both in the variables and on the screen.
use Capture::Tiny ':all';
my ($stdout, $stderr);
($stdout, $stderr) = tee {
system($command);
};
A simple approach that I could think of:
#! /usr/bin/perl -w
# Using perl one liner as a command here
# which prints a string to STDOUT and STDERR
my $cmd = "perl -e 'print STDOUT \"stdout\n\"; print STDERR \"stderr\n\";'";
my $log = "./test.log";
# using "2>&1" we are redirecting stderr also to stdout
system("$cmd 2>&1 | tee $log");
# Sample run results in both the strings getting printed to console as well as to log file
> perl test.pl
stderr
stdout
> cat test.log
stderr
stdout
I currently have a Perl script that runs an external command on the system, gathers the output, and performs some action based on what was returned. Right now, here is how I run this (where $cmd is a string with the command set up):
@output = `$cmd`;
I'd like to change this so that if the command hangs and does not return a value after a certain amount of time, I kill the command. How would I go about running this asynchronously?
There are a LOT of ways to do this:
You can do this with a fork (perldoc -f fork)
or using threads (perldoc threads). Both of these make passing the returned information back to the main program difficult.
On systems that support it, you can set an alarm (perldoc -f alarm) and then clean up in the signal handler.
You can use an event loop like POE or Coro.
Instead of the backticks, you can use open(), or open2/open3 (cf. IPC::Open2, IPC::Open3), to start a program while getting its STDOUT/STDERR via a file handle, and run non-blocking read operations on it (perldoc -f select, and probably google "perl nonblocking read").
As a more powerful variant of the openX()'s, check out IPC::Run/IPC::Cmd (see the sketch after this list).
Probably tons I can't think of in the middle of the night.
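For the IPC::Run route, a minimal sketch might look like the following (the command and timeout value are placeholders); run() throws an exception when the timer expires, so the command gets cut off rather than hanging forever:
use strict;
use warnings;
use IPC::Run qw(run timeout);

my @cmd = ('/bin/sleep', '10');                  # placeholder command
my ($in, $out, $err) = ('', '', '');

eval {
    run \@cmd, \$in, \$out, \$err, timeout(5);   # dies if the command takes longer than 5s
};
if ($@) {
    warn "command failed or timed out: $@";
}
else {
    print "captured output: $out";
}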
If you really just need to put a timeout on a given system call that is a much simpler problem than asynchronous programming.
All you need is alarm() inside of an eval() block.
Here is a sample code block that puts these into a subroutine you could drop into your code. The example calls sleep, so the output isn't exciting, but it does show you the timeout functionality you were interested in.
Output of running it is:
/bin/sleep 2 failure: timeout at ./time-out line 15.
$ cat time-out
#!/usr/bin/perl
use warnings;
use strict;
my $timeout = 1;
my @cmd = qw(/bin/sleep 2);

my $response = timeout_command($timeout, @cmd);
print "$response\n" if (defined $response);

sub timeout_command {
    my $timeout = shift;
    my @command = @_;
    undef $@;
    my $return = eval {
        local($SIG{ALRM}) = sub { die "timeout"; };
        alarm($timeout);
        my $response;
        open(CMD, '-|', @command) || die "couldn't run @command: $!\n";
        while (<CMD>) {
            $response .= $_;
        }
        close(CMD) || die "Couldn't close execution of @command: $!\n";
        $response;
    };
    alarm(0);
    if ($@) {
        warn "@cmd failure: $@\n";
    }
    return $return;
}
If your external program doesn't take any input, look for the following words in the perlipc manpage:
Here's a safe backtick or pipe open for read:
Use the example code and guard it with an alarm (which is also explained in perlipc).
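A rough sketch combining the two, assuming a placeholder command and timeout: the list form of open forks and execs the program directly (no shell involved), which is equivalent to the explicit-fork version shown in perlipc, and the eval/alarm pair turns a hang into a catchable exception.
use strict;
use warnings;

my @cmd     = ('/bin/sleep', '2');            # placeholder command
my $timeout = 5;

my $output = eval {
    local $SIG{ALRM} = sub { die "timeout\n" };
    alarm $timeout;

    # "safe" pipe open for read: fork + exec, no shell ever sees @cmd
    open my $kid, '-|', @cmd or die "can't run @cmd: $!";
    local $/;                                 # slurp everything the child prints
    my $buf = <$kid>;
    close $kid;
    $buf;
};
alarm 0;

warn "command failed or timed out: $@" if $@;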
I coded the below to run rsync on 20 directories simultaneously (in parallel instead of sequentially, which would have required me to wait hours for it to complete):
use threads;

for my $user ( keys %users ) {
    my $host = $users{$user};

    async {
        system <<~ "SHELL";
            ssh $host \\
                rsync_user $user
            SHELL
    };
}
$ pgrep -lf rsync | wc -l
20
Not sure if it's best or even a good solution, but I was glad that it worked for my use case.
With this you get mixed output on screen (which I ignored anyway), but it does its job successfully.
The threads pragma exports the (very useful) async function by default.
rsync_user is my Perl script that wraps the rsync command with its options and the source and target directories already set.
Ran on FreeBSD 13.1 with Perl 5.32.1
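One caveat worth noting (my own addition, not part of the original post): as written, the main script can exit while the rsyncs are still running; joining the threads at the end makes it wait for all of them to finish:
# wait for every async rsync thread to finish before exiting
$_->join for threads->list;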