I am writing Perl code to update a working copy from a repository (`jsvn update .`, a shell command in my case). While the update is going on, I want the results displayed on stdout. Running `jsvn update .` by itself does show them, but I have to keep watching the monitor for errors, and in case of an error I have to run a cleanup and start the process again. I wrote a program for that, but it does not display any output on the screen. The cursor keeps blinking and I know the process is running in the background, but I want the results displayed on stdout as well. Please help me.
#!/usr/bin/perl
use strict;
use warnings;
use Capture::Tiny qw/tee/;

my ($stdout, $stderr, @result);

sub code {
    `jsvn cleanup .`;
    # tee echoes stdout/stderr to the terminal while also capturing them
    ($stdout, $stderr, @result) = tee { system("jsvn update .") };
    print "@result\n";
}

code();
if ($stderr) {
    code();
}
else {
    print "The checkout has been done successfully\n";
    exit;
}
If you want to use IPC::System::Simple, you can grab exit values through $EXITVAL by doing something like this:
...
use IPC::System::Simple qw[capture $EXITVAL];
use feature qw[switch];
...
my @result = capture('jsvn update .');
given ($EXITVAL) {
    when (0) {
        print "Ok\n";
    }
    when (1) {
    }
    ..
    when (N) {
    }
}
...
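Since the goal in the question is to watch the output live, note that capture collects the command's stdout instead of echoing it. Here is a minimal sketch using IPC::System::Simple's run, which passes output straight through to the terminal like the built-in system; the allowed exit values [0, 1] and the cleanup-then-retry step are assumptions about how jsvn behaves, so adjust them for your setup:

use IPC::System::Simple qw(run $EXITVAL);

# The leading array ref lists the exit values run() may return
# without dying, so a failed update can be handled below.
run([0, 1], 'jsvn', 'update', '.');
if ($EXITVAL != 0) {
    run('jsvn', 'cleanup', '.');          # assumed recovery step
    run([0, 1], 'jsvn', 'update', '.');   # retry once after cleanup
}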
I have written a small script from which I'm calling another script.
Code:
package.PL
use strict;
no warnings 'experimental::smartmatch';
use feature qw(switch);

print "\nPlease enter Perl Installation Path\n";
my $path = <>;
$path =~ s/^\s+|\s+$//g;

while (1) {
    print "\nEnter your Choice : \n";
    print "1.Premigration Script for active records\n";
    print "2.Premigration Script for archival records\n";
    print "3.Post Migration Script\n";
    print "4.Cleanup Script\n";
    print "5.Exit\n";
    my $input = <>;
    given ($input) {
        when (1) { system("$path/perl export_from_ddts.pl configfile_active.ini");   system("$path/perl convert_to_csv.pl configfile_active.ini"); }
        when (2) { system("$path/perl export_from_ddts.pl configfile_archived.ini"); system("$path/perl convert_to_csv.pl configfile_archived.ini"); }
        when (3) { system("$path/perl post_migration.pl configfile_active.ini"); }
        when (4) { system("$path/perl cleanup.pl"); }
        default {
            if ($input > 4) {
                print "\nYou want to exit the menu? y/n\n";
                my $state = <>;
                $state =~ s/^\s+|\s+$//g;
                if ($state eq 'y') {
                    last;
                }
                else {
                    continue;
                }
            }
        }
    }
}
If I call any script from package.PL, it runs twice.
For example: if I select option 1 to run the pre-migration script for active projects, I get the same output twice.
Perl version: 5.18.1
I'm running on another Perl installation (not the system perl) on Linux.
First of all, you could use $^X for the path of the Perl interpreter, or $EXECUTABLE_NAME if you use English.
You see the output twice? Your code is OK. Why do you use no warnings 'experimental::smartmatch' when you don't use any smartmatch operators?
I have tried to reconstruct your code, but it's working for me.
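As a sketch of the $^X suggestion, here is how option 1 could call the helper scripts with the same interpreter that is running package.PL (the script and config names are taken from the question), using the list form of system so no shell is involved:

# $^X holds the path of the Perl binary executing this script,
# so the user never has to type an installation path.
system($^X, 'export_from_ddts.pl', 'configfile_active.ini') == 0
    or warn "export_from_ddts.pl exited with status $?\n";
system($^X, 'convert_to_csv.pl', 'configfile_active.ini') == 0
    or warn "convert_to_csv.pl exited with status $?\n";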
I am using the following lib: Net::SSH2.
I can connect to my device and get the output OK in most cases. Below is the relevant code:
sub logIntoDevice
{
    my $self = shift;
    my ($ssh2) = @_;
    if (! $ssh2->connect($self->deviceIP))
    {
        say "Failed to connect to device:", $self->deviceIP;
        $ssh2->disconnect();
        exit 0;
    }
    if (! $ssh2->auth_password($self->usr, $self->pass))
    {
        say "Authentication Fail to ", $self->deviceIP;
        $ssh2->disconnect();
        exit 0;
    }
    my $channel = $ssh2->channel();
    $channel->blocking(0);
    $channel->shell();
    return $channel;
}

sub sendCmd
{
    my $self = shift;
    my ($cmd, $channel) = @_;
    my @cmdOutput;
    print $channel "$cmd\n";
    while (<$channel>)
    {
        chomp $_;
        push(@cmdOutput, $_);
    }
    return @cmdOutput;
}
Below are the commands I send to the subs. They work fine and the output is written to the file OK:
$self->writeToFile($fileName,$self->sendCmd("show clock",$channel));
$self->writeToFile($fileName,$self->sendCmd("\n",$channel));
$self->writeToFile($fileName,$self->sendCmd("dir",$channel));
with the exception of when I send the following command:
$self->writeToFile($fileName,$self->sendCmd("sh run",$channel));
The output of the command on the device using PuTTY is:
sh run
Building configuration...
Current configuration : 16575 bytes
!
!
!Last configuration change at 16:37:19 CET+1 Sat Mar 15 2014
.....
but in the log file all you see is
sh run
Building configuration...
So the issue is that the blank lines after the "Building configuration..." output make the while (<$channel>) loop think it has reached the end of the output.
My issue is that I can't figure out a way to loop through the data without using a while loop.
UPDATE
OK, I came up with this solution, but it seems very clunky. There must be a better way of doing this:
sub sendCmd
{
    my $self = shift;
    my ($cmd, $channel) = @_;
    my @cmdOutput;
    my $currPrompt;
    # get prompt. I am sure there is a better way!!! just can't figure it out
    print $channel "\n";
    while (<$channel>)
    {
        $currPrompt = $_;
    }
    print $channel "$cmd\n";
    while (42)
    {
        my $inerOutput;
        while (<$channel>)
        {
            chomp $_;
            $inerOutput = $_;
            push(@cmdOutput, $_);
        }
        if ($inerOutput ne $currPrompt)
        {
            sleep(7);
        }
        else
        {
            last;
        }
    }
    return @cmdOutput;
}
I don't think your issue is blank lines. Most likely, the issue is that you are using non-blocking mode, and it takes time for the device to perform the command. So you are getting an empty line (or an undef) after "Building configuration..." is read because there is no extra output produced yet.
I would use Net::SSH2's poll method with a timeout, which will let you know when there is something to read. If "sh run" takes substantially longer than other commands you are issuing, your sendCmd method needs to be aware of this, and allow more time to pass before it decides no more output is coming its way.
Alternatively, you can (as is the custom when using, for example, Net::Telnet) wait for more output until you see the prompt, whatever the prompt is for the device in question,
and then you will know that the command has finished its execution.
Net::SSH2->poll is deprecated as a result of the libssh2_poll deprecation.
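A minimal sketch of the prompt-based approach described above; the prompt regex, chunk size, and retry delay are assumptions, so adjust them for the device in question:

sub sendCmd
{
    my ($self, $cmd, $channel) = @_;
    my $prompt = qr/[\$#>]\s*$/;    # assumed prompt pattern; adjust for your device
    print $channel "$cmd\n";
    my $buffer = '';
    while (1)
    {
        # in non-blocking mode, read() returns undef while no data is ready
        my $len = $channel->read(my $chunk, 4096);
        if (defined $len && $len > 0)
        {
            $buffer .= $chunk;
            last if $buffer =~ $prompt;    # the prompt is back: command done
        }
        else
        {
            select(undef, undef, undef, 0.25);    # brief pause, then retry
        }
    }
    return split /\r?\n/, $buffer;
}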
I'm running the following code to extract a list of IDs from a MySQL database and store them in an array. I'm doing this exact sequence of steps in other places in my code, but in this location something funny happens and Perl just hangs and never returns. Everywhere else in the code it works fine. If I run this code as is, it runs fine, but if I uncomment the push() call, it hangs. Is this a bug?
my $result = $db->query($sqlstring);
if ($result->numrows > 0) {
    my @list = ();
    while (my %row = $result->fetchhash) {
        my $studyid = $row{'study_id'} + 0;
        print "$studyid\n";
        WriteLog("Found study [" . $studyid . "]");
        # push(@list, $studyid); # uncomment this to hang it
    }
    return \@list;
}
EDIT: I tried Data::Dumper, but now the freezing occurs in a new location. In the code below, it freezes right after the very last Dumper(@list) statement. (I also switched to DBI from the MySQL module, but that had no effect.)
my @list = ();
my $result = $dbh->prepare($sqlstring);
$result->execute();
WriteLog($sqlstring);
if ($result->rows > 0) {
    while (my $row = $result->fetchrow_hashref()) {
        my $studyid = $row->{study_id};
        WriteLog("Found study [" . $studyid . "]");
        push @list, $studyid;
    }
}
print Dumper(@list);
return \@list;
When is this freezing? Is it freezing on the first call to push, or on some subsequent call?
Are you sure you're pushing what you think you're pushing?
What you should do is use Data::Dumper and then do a dump before the push.
use Data::Dumper;
use feature qw(say);

my $result = $db->query($sqlstring);
if ( $result->numrows > 0 ) {
    my @list;
    while ( my %row = $result->fetchhash ) {
        my $studyid = $row{study_id} + 0;
        say $studyid;
        WriteLog("Found study [" . $studyid . "]");
        say "List: " . Dumper @list;
        say "Study ID: " . Dumper \$studyid;
        push @list, $studyid; # uncomment this to hang it
    }
    return \@list;
}
Note I'm using say instead of print. Dumper can cause problems if you don't parenthesize it and use print. If you don't want to use say, then you'll need to do this:
print "Study ID: " . Dumper ( \$studyid ) . "\n";
By the way, what do you return if there are no rows?
I was going to mention that we just had this same problem with one of our scripts. The script is being run through Jenkins and the problem was that there wasn't really a problem. We just hadn't seen all of the output up to the point of the program's current state. So it looked like it was hung on a push - it was actually hung legitimately on a sleep() call a few lines later. It was fixed with the auto-flush flag which David W. mentioned. The problem was worsened by the use of Jenkins as an execution environment (vs the command-line debugger). The STDOUT/STDIN pipe opened in the Jenkins context doesn't exist in a command-line shell. So we would never see the issue from the command line - further confusing us. Once I realized that there was a communications pipe involved then things fell into place.
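For reference, the auto-flush fix mentioned above is the standard Perl idiom of unbuffering STDOUT, so output reaches a pipe (such as the one Jenkins opens) immediately instead of sitting in a block buffer:

# Flush STDOUT after every print instead of block-buffering it.
$| = 1;

# Equivalent, more self-documenting form:
use IO::Handle;
STDOUT->autoflush(1);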
I am relatively new to Perl programming and I am trying to figure out how open3 works. Here is the code:
#!/usr/bin/perl
use strict;
use warnings;
use IPC::Open3;

my $dir = "/home/vman/Documents/Dev/perl_scripts/Provenance/temp";
my $fileHandle;
my $bashPid;

print "Starting main program\n";

my $pid = fork();
if ($pid) # parent process
{
    print("Start transfer.\n");
    $bashPid = $pid;
    # Attaching an strace to the executed command, which happens in the child process
    open3(\*WRITE, \*READ, \*ERROR, "strace", "-f", "-F", "-e", "trace=open,execve", "-p", $bashPid, "-s", "2097152", "-q");
    while (<READ>)
    {
        print("Here1\n");
        print("$_");
    }
    while (<ERROR>)
    {
        print("$_");
    }
    print("Finish transfer.\n");
}
elsif ($pid == 0) # child process
{
    if (scalar(@ARGV) == 0)
    {
        exit;
    }
    my $args = join(' ', @ARGV);
    exec($args);
}
else
{
    die("Could not fork.");
}
close(READ);
close(WRITE);
close(ERROR);
waitpid($bashPid, 0);
print "End of main program\n";
I want to run an strace on a bash process, then capture all the output while it is being produced. Then I will take that output, parse it to see which files are being changed by which process, and save those changes in a MySQL database. For now, all I am trying to do is attach an strace to an existing bash process and print the strace output within the running bash terminal, just to make sure that it is reading the output asynchronously.
One of the problems is that I am getting the output through the ERROR filehandle. I am a little confused as to why this is happening. Am I using the correct order for open3, and if there is an error, why is the correct output even making it to stderr?
The second problem I have is that I am getting the output only when exec ends, which is no good, since it needs to be done while exec is running. I thought open3 ran asynchronously.
As suggested, this is what I did, and it works perfectly:
#!/usr/bin/perl
use strict;
use warnings;
use IPC::Run3;

my $bashPid;

print "Starting main program\n";

my $pid = fork();
if ($pid) # parent process
{
    print("Start transfer.\n");
    $bashPid = $pid;
    # Attaching an strace to the executed command, which happens in the child process
    my $command = "strace -fFe trace=open,execve -p $bashPid -s 2097152 -q";
    run3($command, \*STDIN, \*STDOUT, \*STDERR);
    if ($?)
    {
        die "something went horribly wrong";
    }
    while (<STDERR>)
    {
        print($_);
    }
    print("Finish transfer.\n");
}
elsif ($pid == 0) # child process
{
    if (scalar(@ARGV) == 0)
    {
        exit;
    }
    my $args = join(' ', @ARGV);
    exec($args);
}
else
{
    die("Could not fork.");
}
close(STDIN);
close(STDOUT);
close(STDERR);
waitpid($bashPid, 0);
print "End of main program\n";
One of the problems is that I am getting the output through the ERROR filehandle
Correct. strace writes to STDERR.
The second problem I have is that I am getting the output only when exec ends which is no good since it needs to be done while exec is running. I thought open3 runs asynchronously.
That's because you only start reading from the child's STDERR after the child closes its STDOUT when it ends.
In fact, you're lucky you haven't deadlocked yet. By reading one handle at a time as you are currently doing, you'll deadlock when strace has produced enough output to fill the pipe.
You need to read from both the child's STDOUT and STDERR as the output comes in. You could do this with the help of select, polling non-blocking handles, or threads. None of those options is as simple as ditching open3 and using a higher-level module that handles this for you. The simpler IPC::Run3 and the fully featured IPC::Run are good choices.
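As a sketch of what the IPC::Run route might look like (the strace arguments mirror the question; the OUT:/ERR: prefixes are just illustrative):

use IPC::Run qw(run);

my @cmd = ('strace', '-f', '-e', 'trace=open,execve', '-p', $bashPid, '-q');

# run() pumps the child's output as it is produced; each callback
# receives a chunk of data, so nothing waits for the child to exit.
run \@cmd,
    '>',  sub { print "OUT: $_[0]" },
    '2>', sub { print "ERR: $_[0]" };   # strace writes its trace to stderr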
One way I found is to check whether the Perl debugger is "loaded" by testing defined($DB::single), and to assume Komodo is active if $DB::single is defined.
But this might also mean the script is legitimately running as perl -d under the "standalone" debugger.
#!/usr/local/ActivePerl-5.10/bin/perl
use strict;
use warnings;
use feature qw/say switch/;

# detect debugger ..
SayDebugerStatus();

sub SayDebugerStatus {
    print "Debugger ";
    given ($DB::single) {
        when (undef) {
            say "not loaded.";
        }
        when (0) {
            say "loaded but inactive";
        }
        default {
            say "loaded and active";
        }
    }
    return defined($DB::single) ? 1 : 0;
}
zakovyrya's suggestion leads to:
if ( grep( /.*Komodo\ IDE\.app/g, values %INC ) ) {
    say "Komodo is running";
} else {
    say "Komodo is not running";
}
But is there another way?
UPDATE: Today my isKomodo() routine failed. Some investigation showed that IT changed my global path settings from "long" to "short" names (this is under Windows), so there no longer is a "KOMODO" string in the %INC hash.
I'm looking for a replacement.
What does your %INC contain when you launch the script under Komodo? There is a good chance that some Komodo-specific modules are loaded.
It's better to print its contents with:
use Data::Dumper;
print Dumper \%INC;
Seems like something like this is easier (for the script to know it's running under Komodo):
use Modern::Perl;
if (exists $ENV{'KOMODO_VERSION'}) {
say "Script is running under Komodo $ENV{'KOMODO_VERSION'} !";
} else {
say "script is not running in Komodo"
}
UPDATE (by 'lexu): Komodo 7 now places KOMODO_VERSION in the environment.