As mentioned in the CPAN documentation of Net::Telnet:
All output is flushed while all input is buffered. Each object contains its own input buffer.
Talking crudely, does that mean the command I am sending is the "output" (which is flushed) and the result is the "input" (which is buffered)?
I have the following code for sending commands and receiving the results.
sub sendCmd {
    my $t     = shift @_;
    my $query = shift @_;
    $t->buffer_empty;
    print "Sending command $query to telnet target\n";
    my @out = $t->cmd(String => $query, Timeout => 70,
                      Errmode => 'return', Prompt => '/#/');
    push(@out, ${$t->buffer});
    $t->buffer_empty;
    return @out;
}
The problem here is that if I have a loop of the following kind:
while (1)
{
    @res1 = sendCmd('cmd1');
    print @res1, "\n";
    @res2 = sendCmd('cmd2');
    print @res2, "\n";
}
This gives the expected alternating output for some time, but after that:
Problem 1: after a while, the result of cmd1 appears as the result for cmd2 and vice versa.
Problem 2: the command itself may appear appended to the result.
Please help me with both of these problems. Meanwhile, I was thinking of reverting to a waitfor/print combination (a sketch follows below).
Thanks.
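For reference, the print/waitfor version I had in mind would look roughly like this (an untested sketch only; sendCmdWaitfor is just a placeholder name, with the prompt regex and timeout copied from sendCmd above):

sub sendCmdWaitfor {
    my ( $t, $query ) = @_;
    $t->buffer_empty;
    $t->print($query);                       # send the command
    my ( $output, $match ) = $t->waitfor(    # read until the prompt reappears
        Match   => '/#/',
        Timeout => 70,
        Errmode => 'return',
    );
    return defined $output ? $output : '';
}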
Related
I have this Perl subroutine
sub request
{
#INPUTS
....
....
my $users_info = $_[5];
my $req_exp = <<DOC;
#SN# Step $number
#DN# AUTHENT $type_authent + PFS SERVICE
#ED# $reponse
GET /$jonction/testussowt/printenv HTTP/1.1
Host : $HOST
$users_info
---
DOC
return $req_exp ;
}
If I call it like this:
my $test = &request(other_arguments..,'user1');
print $test ;
I get the correct output (I am showing only the last two lines that matter):
user1
---
But if I call it like this:
my $var = &other_sub_that_returns_a_string;
my $test = &request(other_arguments..,$var);
print $test;
I get an extra empty line in my output:
user1

---
Without the full code, it's hard to say exactly what's happening here.
Best guess is that your subroutine is returning with a line break at the end, or pulling the line break from somewhere else.
To fix that:
my $var = &other_sub_that_returns_a_string;
chomp($var);
my $test = &request(other_arguments..,$var);
print $test;
If you post your code, I can definitely take a look and try to help you figure out where the line break is coming from.
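As a standalone illustration of where the blank line comes from (not your original code): the here-doc already ends the $users_info line with its own newline, so a value that still carries a trailing newline produces an empty line before ---:

use strict;
use warnings;

my $users_info = "user1\n";   # a value that still carries its newline
my $req = <<DOC;
$users_info
---
DOC
print $req;                   # prints "user1", an empty line, then "---"

chomp $users_info;            # strip the trailing newline
$req = <<DOC;
$users_info
---
DOC
print $req;                   # prints "user1" directly followed by "---"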
(I have already cross-posted this on another site and will update with the solution from either place, but so far I am struggling to find an answer.)
19th Dec 2013 7:06pm PT --- I found the solution and have updated below.
I am outputting two items of data per line. The first column of data is not fixed length, and I want the second item of data to be correctly aligned in the same position each time, so I am using sprintf to format the data and then mail it out.
The print command's output shows that the data is formatted correctly.
Yet the output in my email is different: the alignment is all wrong.
I initially thought it was the mailer (MIME::Lite), but I am not sure it is.
The reason I think that is that, in the Eclipse Perl environment, when I look at the debug variable list I see the strings padded out exactly like the output in my email, yet the print statement shows the data correctly aligned!
Please help me understand what is going on here and how to fix it.
use MIME::Lite;
$smtp = "mailserver";
$internal_email_address = 'myemailaddess';
$a = sprintf ("%-60s %-s\n", "the amount for Apple is ","34");
$b = sprintf ("%-60s %-s\n", "the amount for Lemons is", "7");
print $a;
print $b;
$c = $a.$b;
mailer( $internal_email_address,"issue", $c);
sub mailer {
    my ( $addr, $subj, $output ) = @_;
    print "$_\n" for $addr;
    print "$_\n" for $subj;
    print "$_\n" for $output;
    $msg = MIME::Lite->new(
        From    => 'xxxx',
        To      => $addr,
        Subject => $subj,
        Data    => $output
    );
    MIME::Lite->send( 'smtp', $smtp, Timeout => 60 );
    eval { $msg->send };
    $mailerror = "Status: ERROR email - MIME::Lite->send failed: $@\n" if $@;
    if ( $mailerror eq '' ) {
        $status = "Status: Mail sent\n";
    }
    else {
        $status = $mailerror;
    }
}
$a = sprintf ("%-10s %-s\n", "the amount for Apple is ","34");
The argument "the amount for Apple is" is too long for the format specifier %-10s, so the actual amount of space used for that argument will be the length of the string.
You could use a format specifier with a larger value (e.g., %-25s) that can accommodate any value you're likely to apply to it.
Or if you want sprintf to truncate the argument at 10 characters, use the format specifier %-10.10s.
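Here is a standalone illustration of the three cases (a quick sketch you can run on its own):

use strict;
use warnings;

my $label = "the amount for Apple is";   # 23 characters

# %-10s gives only a minimum width, so a longer argument overflows it
printf "[%-10s] [%s]\n",    $label, 34;  # [the amount for Apple is] [34]

# %-25s is wide enough, so the second column stays aligned
printf "[%-25s] [%s]\n",    $label, 34;  # [the amount for Apple is  ] [34]

# %-10.10s also sets a maximum width, truncating the argument to 10 chars
printf "[%-10.10s] [%s]\n", $label, 34;  # [the amount] [34]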
Can you hook the opening of the DATA handle for a module while Perl is still compiling? By that I mean: is there a way I can insert code that will run after Perl has opened the DATA glob for reading, but before the compilation phase has ended?
Failing that, can you at least see the raw text after __DATA__ before the compiler opens it up?
In response to Ikegami, on recent scripts that I have been working on, I have been using __DATA__ section + YAML syntax to configure the script. I've also been building up a vocabulary of YAML configuration handlers where the behavior is requested by use-ing the modules. And in some scripts that are quick-n-dirty, but not quite enough to forgo strict, I wanted to see if I could expose variables from the YAML specification.
It's been slightly annoying though just saving data in the import subs and then waiting for an INIT block to process the YAML. But it's been doable.
The file handle in DATA is none other than the handle the parser uses to read the code found before __DATA__. If that code is still being compiled, then __DATA__ hasn't been reached yet, so the handle hasn't been stored in DATA.
You could do something like the following instead:
open(my $data_fh, '<', \<<'__EOI__');
.
. Hunk of text readable via $data_fh
.
__EOI__
I don’t know where you want the hook. Probably in UNITCHECK.
use warnings;
sub i'm {
    print "in @_\n";
    print scalar <DATA>;
}
BEGIN { i'm "BEGIN" }
UNITCHECK { i'm "UNITCHECK" }
CHECK { i'm "CHECK" }
INIT { i'm "INIT" }
END { i'm "END" }
i'm "main";
exit;
__END__
Data line one.
Data line two.
Data line three.
Data line four.
Data line five.
Data line six.
Produces this when run:
in BEGIN
readline() on unopened filehandle DATA at /tmp/d line 5.
in UNITCHECK
Data line one.
in CHECK
Data line two.
in INIT
Data line three.
in main
Data line four.
in END
Data line five.
You can use any of the blocks that run after compilation but before runtime to change the *DATA handle. Here is a short example using INIT to upper-case the contents of *DATA.
while (<DATA>) {
    print;
}

INIT {   # after compile time, so DATA is opened, but before runtime
    local $/;
    my $file = uc <DATA>;
    open *DATA, '<', \$file;
}
__DATA__
hello,
world!
prints:
HELLO,
WORLD!
Which one of the blocks to use depends on other factors in your program. More detail about the various timed blocks can be found on the perlmod manpage.
I'm afraid not, if I understood your question right. It's written in the documentation:
Note that you cannot read from the DATA filehandle in a BEGIN block:
the BEGIN block is executed as soon as it is seen (during
compilation), at which point the corresponding __DATA__ (or __END__)
token has not yet been seen.
There's another way, though: read the file containing the __DATA__ section as a normal text file, parse that section, then require the script file itself (which will happen at run time); a rough sketch follows. I don't know whether it will be relevant in your case.
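A rough sketch of that idea (the file name script_with_data.pl is made up, and this is only an outline, not tested against your setup):

#!/usr/bin/perl
# loader.pl - read the target script as plain text, grab everything after
# __DATA__, and only then load the script itself.
use strict;
use warnings;

my $script = 'script_with_data.pl';

open my $fh, '<', $script or die "Cannot open $script: $!";
my ( $in_data, $raw ) = ( 0, '' );
while ( my $line = <$fh> ) {
    if    ($in_data)                   { $raw .= $line }
    elsif ( $line =~ /^__DATA__\s*$/ ) { $in_data = 1 }
}
close $fh;

# ... parse $raw here (e.g. as YAML) and set things up ...

require "./$script";   # the script itself is compiled and run only now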
perlmod says:
CHECK code blocks are run just after the initial Perl compile phase ends and before the run time begins, in LIFO order.
Maybe you are looking for something like this?
use feature 'say';   # needed for say()

CHECK {
    say "Reading from <DATA> ...";
    while (<DATA>) {
        print;
        $main::count++;
    }
}
say "Read $main::count lines from <DATA>";
__DATA__
1
2
3
4
5
This produces the following output:
Reading from <DATA> ...
1
2
3
4
5
Read 5 lines from <DATA>
I found out that ::STDIN actually gives me access to the stream '-', and that I can save the current location with tell() and then seek() back to it when I'm done.
By using that method, I could read the __DATA__ section in the import sub!
sub import {
my ( $caller, $file ) = ( caller 0 )[0,1];
my $yaml;
if ( $file eq '-' ) {
my $place = tell( ::STDIN );
local $RS;
$yaml = <::STDIN>;
seek( ::STDIN, $place, 0 );
}
else {
open( my $inh, '<', $file );
local $_ = '';
while ( defined() and !m/^__DATA__$/ ) { $_ = <$inh>; }
local $RS;
$yaml = <$inh>;
close $inh;
}
if ( $yaml ) {
my ( $config ) = YAML::XS::Load( $yaml );
no strict 'refs';
while ( my ( $n, $v ) = each %$config ) {
*{"$caller\::$n"} = ref $v ? $v : \$v;
}
}
return;
}
This worked on Strawberry Perl 5.16.2, so I don't know how portable it is, but right now, for me, it is working.
Just some background: I used to do a bit of programming with Windows Script Files. One thing I liked about the WSF format was that you could specify globally useful objects outside of the code, e.g. <object id="xl" progid="Excel.Application" />. I have always liked the look of programming by specification and letting some modular handler sort the data out. Now I can get similar behavior through a YAML handler: excel: !ActiveX Excel.Application.
This works for me.
The test is here, in case you're interested:
use strict;
use warnings;
use feature 'say';   # say() is used below
use English qw<$RS>;
use Test::More;
use data_mayhem; # <-- that's my module.
is( $k, 'Excel.Application' );
is( $l[1], 'two' );
{
    local $RS;
    my $data = <DATA>;
    isnt( $data, '' );
    say $data;
}
done_testing;
__DATA__
---
k : !ActiveX Excel.Application
l :
- one
- two
- three
I'm running a Perl script to pull a list of about 20 text files and parse through them. For some reason my process is bombing partway through the list, and I am having trouble debugging it.
Does anyone know the location of the Strawberry Perl log file, and whether there's a built-in max execution time or memory limit variable like in PHP?
There are three files:
1. cron.php
2. nightly_script.php
3. myscript.pl
It successfully executes the first INSERT statement in that while loop, but nothing after that. Since this is running like a cron job, I don't have an output window to look at, which is why I was hoping there's a log somewhere so I can see any syntax or MySQL errors. Also, if I just run myscript.pl on the file in question directly, it works with no problem.
cron.php
<?php
date_default_timezone_set('America/New_York');
/*
min hr dom month dow cmd
hour in 24 hour format, no leading zeros
*/
$jobsQueue = Array();
$jobsQueue[] = Array('10', '0', '*', '*', '*', 'php c:\nightly_script.php'); // These items are order dependent, so run as one script that synchronously executes each command
while (1) {
    $now = time();
    $min = date('i', $now);
    $hr  = date('G', $now);
    echo "$hr:$min\n";
    foreach ($jobsQueue AS $job) {
        if ($job[0] == $min && $job[1] == $hr) {
            system("$job[5]>NULL");
        }
    }
    sleep(60);
}
?>
nightly_script.php
<?php
// Process Hand Histories
system('perl myscript.pl');
?>
myscript snippet
while ( ($key, $value) = each(%players) ) {
    print "$key => $value\n";
    if ($value > 0)
    {
        $uname = $key;
        $uname =~ s/player(.*)(\s*)/$1/;
        $connect = DBI->connect("DBI:mysql:database=$config_mysql_db;host=$config_mysql_server",
                                $config_mysql_user, $config_mysql_pass, {'RaiseError' => 1});
        print "\n*****\n$uname\n*****\n";
        $updateStatement = "INSERT statement";
        $executeStatement = $connect->prepare($updateStatement);
        $executeStatement->execute();
        $updateStatement = "UPDATE command";
        $executeStatement = $connect->prepare($updateStatement);
        $executeStatement->execute();
        delete $players{$key};
        # Clean up the record set and the database connection
        $connect->disconnect();
    }
    elsif ($value <= 0)
    {
        delete $players{$key};
    }
}
Since Perl doesn't have a log like PHP, you can create your own log file by redirecting the Perl script's STDOUT and STDERR to a file. Try doing this by modifying the system call in nightly_script.php.
system('perl myscript.pl 1>myperllog.txt 2>&1');
or
system('perl myscript.pl 1>myperllog.txt 2>myperllog.err');
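Another option (a rough sketch; C:/myperllog.txt is just an example path) is to make myscript.pl write its own log by redirecting its STDOUT and STDERR near the top of the script, which also captures the die messages produced by RaiseError => 1:

# Near the top of myscript.pl
use strict;
use warnings;

open STDOUT, '>>', 'C:/myperllog.txt' or die "Cannot redirect STDOUT: $!";
open STDERR, '>&', \*STDOUT           or die "Cannot redirect STDERR: $!";
$| = 1;   # flush prints immediately so the log is complete if the job dies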
I have 2 scripts for a task.
The 1st outputs lines of data (terminated with CR/LF) to STDOUT now and then.
The 2nd keeps reading data from STDIN for further processing in the following way:
use strict;
my $dataline;
while (1) {
    $dataline = "";
    $dataline = <STDIN>;
    until ( $dataline ne "" ) {
        sleep(1);
        $dataline = <STDIN>;
    }
    # further processing with a non-empty data line follows
}
print "quitting...\n";
I redirect the output from the 1st to the 2nd using a pipe, as follows: perl scpt1 | perl scpt2.
But the problem I'm having with these two scripts is that the 2nd script seems to keep getting the initial load of lines of data from the 1st script when there is no more data after the initial load.
I wonder if anybody who has had similar experiences can kindly help a bit?
Thanks.
You seem to be making this much more complicated than it needs to be. Perl normally uses blocking I/O, which means that <STDIN> won't return until there's a complete line of input.
use strict;
use warnings; # use this too
while (my $dataline = <STDIN>) {
    # further processing with a non-empty data line follows
}
print "quitting...\n";
When there's no more input (in your example, when scpt1 exits), <STDIN> returns undef, which will exit the while loop. (Perl adds an implicit defined test to while ($var = <>) loops.)
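For completeness, here is a minimal producer/consumer pair in that style (scpt1.pl and scpt2.pl are placeholder names). One detail worth checking in your real producer: if its STDOUT is block-buffered, lines reach the pipe in bursts rather than one at a time, so setting $| = 1 there keeps the data flowing line by line.

# scpt1.pl - emits a line of data every second
use strict;
use warnings;

$| = 1;   # unbuffer STDOUT so each line goes down the pipe immediately
for my $n ( 1 .. 5 ) {
    print "data line $n\n";
    sleep 1;
}

# scpt2.pl - reads lines from the pipe until the producer exits
use strict;
use warnings;

while ( my $dataline = <STDIN> ) {
    chomp $dataline;
    print "processing: $dataline\n";   # further processing goes here
}
print "quitting...\n";

Run them as: perl scpt1.pl | perl scpt2.pl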