When executing a Perl script from the command line, how can I ensure that my output doesn't scroll off the screen?
In other words, how do I mimic the functionality of the Unix more or less commands?
The Term::Pager module would seem to be what you're looking for.
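For example, a minimal sketch based on the module's synopsis (untested):
use Term::Pager;

# Build an in-terminal pager; rows/cols are optional
my $pager = Term::Pager->new(rows => 25, cols => 80);
$pager->add_text("Lots\nand\nlots\nof\ntext\n");
$pager->more();    # start the interactive paging loop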
The user can just pipe the output to less. That gives them the option of using their favourite pager, or even not using any pager at all, if they prefer that.
As Matti Virkkunen says, it's better that the user pipes your script to less.
A user of a Unix-like system expects output in plain text, so that he or she can pipe it to other commands when needed. If your script doesn't display its output as plain text, users may find it less usable.
For a quick-and-dirty way, you can pipe the text to less or more:
my $text = <<'EOD';
Lots
and
lots
of
text
EOD
my $pager = $ENV{PAGER} || 'less';
open(my $less, '|-', $pager, '-e') || die "Cannot pipe to $pager: $!";
print $less $text;
close($less);
There are various less and more flags to allow the script to continue when it reaches the bottom of the text.
In DOS, I would run the following command:
simread31 file.sim
and the program would ask me for 9 inputs.
Now I'd like to do it in Perl. I know I can combine echo with system for one input, but how do I do it for more than one?
Many thanks.
The reason you're having trouble doing this with system() is that it's the wrong tool for the job. If you open(my $fh, '|-', 'simread31', 'file.sim') you'll be able to print or say your input to the child program's STDIN.
I'm not at my machine right now, so that syntax is from memory. perldoc perlopentut should provide more detail.
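Something like this minimal sketch (untested; the nine answers are placeholders for whatever simread31 actually asks for):
use strict;
use warnings;

# Placeholder answers for the nine prompts, one per line
my @answers = (1, 2, 3, 4, 5, 6, 7, 8, 9);

open(my $fh, '|-', 'simread31', 'file.sim')
    or die "Cannot start simread31: $!";
print {$fh} "$_\n" for @answers;    # write each answer to the child's STDIN
close($fh) or warn "simread31 exited with status $?";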
Windows
As Windows does not implement the list form of the piped open, you should use something like this:
open(my $fh, '|-', "simread3.exe $sim_file")
Piping input directly from Perl will often be more efficient than opening, writing, closing, piping via system(), and then cleaning up an external file.
I have a Perl script that has to wrap a PHP script that produces a lot of output, and takes about half an hour to run.
At moment I'm shelling out with:
print `$command`;
This works in the sense that the PHP script is called and it does its job, but no output is rendered by Perl until the PHP script finishes half an hour later.
Is there a way I could shell out so that the output from PHP is printed by Perl as soon as it receives it?
The problem is that Perl won't finish reading until the PHP script terminates, and only when it finishes reading will it write. The backticks operator blocks until the child process exits, and there's no magic to turn it into a read/write loop implicitly.
So you need to write one. Try a piped open:
open my $fh, '-|', $command or die "Unable to run $command: $!";
while (<$fh>) {
    print;    # echo each line as soon as it arrives
}
close $fh;
This should then read each line as the PHP script writes it, and immediately output it. If the PHP script doesn't output in convenient lines and you want to do it with individual characters, you'll need to look into using read to get data from the file handle, and disable output buffering ($| = 1) on stdout for writing it.
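If you do need character- or chunk-level forwarding, here is a minimal sketch of that variant (assuming $command is defined as above):
$| = 1;    # disable output buffering on STDOUT

open my $fh, '-|', $command or die "Unable to run $command: $!";
while (read($fh, my $buf, 1024)) {
    print $buf;    # forward each chunk as soon as it is read
}
close $fh;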
See also http://perldoc.perl.org/perlipc.html#Using-open()-for-IPC
Are you really doing print `$command`?
If you are only running a command and not capturing any of its output, simply use system $command. It will write to stdout directly without passing through Perl.
You might want to investigate Capture::Tiny. IIRC something like this should work:
use strict;
use warnings;
use Capture::Tiny qw/tee/;
my ($stdout, $stderr, @result) = tee { system $command };
Actually, just using system might be good enough, YMMV.
There is a system() call in a Perl script with multiple pipes, using a single scalar argument. The call looks more or less like this:
system("zcat /foo.gz | grep '^.{6}X|Y|Z' | awk '{print $2,$3,$4,$6}' | bzip2 > /foo.processed.bz2");
The file in question (foo.gz) is quite large, about 2GB compressed in size. I guess that's why it was originally done via a system call.
Questions:
The problem now is that this system call always seems to return 0, whether or not one of the piped commands fails. I assume this is because it gets invoked via sh -c '...'. Is that correct?
Is there a way to check if a system() call was successful if only a single scalar argument is passed?
Is there a better way to process a large file like this, one that's equally or more efficient (mainly in terms of speed)?
Thanks for any hints as I am not really familiar with Perl.
Two things:
When you do a system call, the value returned is the exit status of the last command in the pipeline. Thus, you're getting the status code of the bzip2 command.
The reason the program is doing this is probably that the people who wrote it didn't know any better. I've seen Perl programs use system calls for finding the basename of a file, doing a find, and even doing a copy/rename/move. These are all things that can be done faster and more easily inside the Perl program. And you avoid the whole Windows/Unix compatibility issue.
You're always better off using Perl modules for things like this. In this case, I bet the Perl modules will be even faster than the shell pipeline, and you'll have more control over the entire operation.
There's a family of modules under IO::Compress that can handle both gzip and bzip2.
I use Archive::Zip which is a great module, but you want to use the Bzip2 compression algorithm, and Archive::Zip can't handle that.
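For instance, here's a hedged sketch of the whole pipeline in pure Perl using those modules (the file names and awk field list are taken from the question; grouping X|Y|Z after the six-character prefix is an assumption about the intended pattern):
use strict;
use warnings;
use IO::Uncompress::Gunzip qw($GunzipError);
use IO::Compress::Bzip2    qw($Bzip2Error);

my $in = IO::Uncompress::Gunzip->new('/foo.gz')
    or die "gunzip failed: $GunzipError";
my $out = IO::Compress::Bzip2->new('/foo.processed.bz2')
    or die "bzip2 failed: $Bzip2Error";

while (my $line = <$in>) {
    next unless $line =~ /^.{6}(?:X|Y|Z)/;
    my @f = split ' ', $line;
    # awk's $2,$3,$4,$6 correspond to Perl indices 1,2,3,5
    $out->print("@f[1,2,3,5]\n");
}
$in->close;
$out->close;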
system() returns what the /bin/sh shell returns. When multiple commands are pipelined, the shell forks a new process for each of them and the status code of the last command in the chain is returned, in this case bzip2.
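If you keep the shell pipeline, you can still check the result. The sketch below uses the status-decoding idiom from perldoc -f system, plus bash's pipefail option (an addition, not in the original code) so that a failure anywhere in the pipeline shows up in the exit status:
# pipefail: the pipeline's status is that of the first failing command.
# Pipeline copied from the question; the awk stage is omitted for brevity.
my $cmd = q{set -o pipefail; zcat /foo.gz | grep '^.{6}X|Y|Z' | bzip2 > /foo.processed.bz2};

system('bash', '-c', $cmd);

if ($? == -1) {
    die "failed to execute bash: $!";
}
elsif ($? & 127) {
    die sprintf "child died with signal %d", $? & 127;
}
elsif (my $exit = $? >> 8) {
    die "pipeline failed with exit status $exit";
}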
Based on your comments and answers, I'd do it like this now:
$infile =~ s/(.*\.gz)\s*$/gzip -dc < $1|/;   # magic open: pipe the input through gzip
open(OUTFH, "| /bin/bzip2 > $outfile") or die "Can't open $outfile: $!";
open(INFH, $infile) or die "Can't open $infile: $!";
while (my $line = <INFH>) {
    if ($line =~ /^.{6}(?:X|Y|Z)/) {
        # TODO: the awk part...
        print OUTFH $line;
    }
}
close(INFH);
close(OUTFH);
Please feel free to comment and vote up/down.
You'd be better doing the text processing from within perl itself - that's what perl's for :)
system() returns the command's exit status, not its output. To capture the output, call the command via backticks: `command` rather than system('command')
I have a bash script which produces different integer values. When I run that script, the output looks like this:
12
34
34
67
6
This script runs on a Solaris server. In order to provide other users in the network with these values, I decided to write a Perl script which can:
run the bash file
read its output
build a tiny html page with a table in which the bash values are stored
That's a hard job for me because I have almost no experience with Perl. I know I can use system to execute Unix commands (and bash files), but I cannot get the output. I also heard about qx, which sounds very useful for my case.
But I must admit I have no clue how to start... Could you give me a few hints on how to solve this?
With a question like this it's a little hard to know where to begin.
The qx to which you are referring is a feature of Perl. The "q*" or "Quote and Quote-like Operators" are documented in the Perl "operators" man page (normally you'd use man perlop to read that on systems with a conventional installation of Perl).
Specifically qx is the "quoted-execution of a command" ... which is essentially an alternative form of the ` (back tick or "command substitution") operator in Perl.
In other words if you execute a command like:
perl -e '$foo = qx{ls}; print "\n###\n$foo\n###\n";'
... on a system with Perl installed, then it should run Perl, which should evaluate (-e) the expression you've provided (quoted). In other words, we're writing a small program right on the command line. This program starts by creating a variable whose contents will be a "scalar" (which is Perl terminology for a string or number). We're assigning (the =, or assignment, operator) the output captured by executing the ls command to this variable ($foo). After that we're printing the contents of our variable (whatever the ls command would have printed) with ### lines preceding and following those contents.
A quirk of Perl's qx operator (and the various other q* operators) is that it allows you to delimit the command with just about any characters you like. For example, perl -e '$bar = qx/pwd/;' would capture the output of the pwd command. When you use any of the characters that are normally used as paired delimiters around text (parentheses, braces, brackets, and so on), the qx command will look for the appropriate matching closing delimiter. If you use any other punctuation or non-alphanumeric character, that same character will also be the terminating delimiter. This latter behavior is similar to, and was inspired by, a feature of the "substitution" command in the old sed utility and ed line editor, while the matching of parentheses, braces, and so on is a Perl novelty.
So that's the basics of how to capture your shell script's output. To print the numbers in an HTML table you'd have to split the captured output into separate lines (saving them into a list or array), then print your HTML prologue (the <table> and <th> (header) tags, and so on), then loop over a series of <tr> rows, interpolating your numbers into <td> (table data) containers, and finally print your HTML epilogue (with the closing tags).
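Putting those pieces together, a minimal sketch might look like this (the script path is a placeholder for your actual bash script):
use strict;
use warnings;

# Capture the script's output and split it into one value per line
my @values = split /\n/, qx{/path/to/values.sh};

print "<table>\n";
print "  <tr><th>Value</th></tr>\n";
print "  <tr><td>$_</td></tr>\n" for @values;
print "</table>\n";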
For that you'll want to read up on the Perl print function and about "interpolation" in Perl. That's a fairly complex topic.
This is all extremely crude, and there are tools around that let you approach the generation of HTML at a much higher level. It's also rather dubious to wrap the execution of your shell script in a Perl script, since it seems likely that you could modify the shell script to output HTML directly (perhaps as an option controlled by a command-line switch or environment variable), or that you could rewrite the shell script in Perl. That could eliminate the extra work of parsing the output (splitting it into lines and separating out the values), because you could capture the data directly into an array, or even print your HTML rows as you generate them, however your existing shell script does that.
To capture the output of your bash file, you can use the backtick operator:
use strict;
my $e = `ls`;
print $e;
Many, many thanks to you! With your great help I was able to build a Perl script which does a big part of the job.
This is what I have created so far:
#!/usr/bin/perl -w
use strict;
use CGI qw(:standard);
#some variables
my $message = "please wait, loading data...\n";
#First build the web page
print header;
print start_html('Hello World');
print "<H1>we need love, peace and harmony</H1>\n";
print "<p>$message</p>\n";
#Establish a pipeline between the bash and my script.
my $bash_command = '/love/peace/harmony/./lovepeace.bash';
open(my $pipe, '-|', $bash_command) or die $!;
while (my $line = <$pipe>) {
    # Do something with each line.
    print "<p>$line</p>\n";
}
#job done, now refresh page?
print end_html;
When I call that .pl script in my browser, everything works nicely :-) But a few questions are still on my mind:
When I call this website, it is busy loading the values from the pipe. Since there are about 10 values it's rather quick (2-4 seconds), but with 100+ values the user has to wait a while. Since I cannot have a progress bar, I should give some information to the user, like:
"Loading data, please wait..."
And when the job is done, this message should say "Job done" or something similar. But how do I tell when the process is finished?
Can I reload the page once the job is done?
Is there any chance of using my own stylesheet within this Perl CGI?
Regards,
JJ
Why only Perl?
You can use awk for that inside your shell script itself.
I have done this before.
If you have the output values in a variable, then use the method below:
echo $SUBSCRIBERS|awk 'BEGIN {
print "<?xml version=\"1.0\" encoding=\"UTF-8\"?><GenTransactionHandler xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"><EntityToPublish>\n<Entitytype=\"C\" typeDesc=\"Subscriber level\"><TargetApplCode>UHUNLD</TargetApplCode><TrxName>GET_SUBSCR_DATA</TrxName>"
}
{for(i=1;i<NF+1;i++) printf("<value>%d</value>\n",$i)}
END{
print "</Entity>\n</EntityToPublish></GenTransactionHandler>"}' >XML_SUB_NUM`date +%Y%m%d%H%M%S`.xml
In $SUBSCRIBERS the values should be tab-separated.
I can setup a telnet connection in Perl no problems, and have just discovered Curses, and am wondering if I can use the two together to scrape the output from the telnet session.
I can view on a row, column basis the contents of STDOUT using the simple script below:
use Curses;
my $win = Curses->new;
$win->addstr(10, 10, 'foo');
$win->refresh;
my $thischar = $win->inch(10, 10);
print "Char $thischar\n";
And using the below, I can open a telnet connection and send/receive commands with no problem:
use Net::Telnet;
my $telnet = Net::Telnet->new(Timeout => 9999);
$telnet->open($ipaddress) or die "telnet open failed\n";
$telnet->login($user,$pass);
my $output = $telnet->cmd("command string");
... But what I would really like to do is get the telnet response (which will include terminal control characters) and then search it on a row/column basis using curses. Does anyone know of a way I can connect the two together? It seems to me that curses can only operate on STDOUT.
Curses does the opposite. It is a C library for optimising screen updates from a program writing to a terminal, originally designed to be used over a slow serial connection. It has no ability to scrape a layout from a sequence of control characters.
A better bet would be a terminal emulator that has an API with the ability to do this type of screen scraping. Off the top of my head I'm not sure if any Open-source terminal emulators do this, but there are certainly commercial ones available that can.
If you are interacting purely with plain-text commands and responses, you can script that with Expect. Otherwise, you can use Term::VT102, which lets you screen-scrape applications that use VT102 escape sequences for screen control (e.g., an application using the curses library): you can read specific parts of the screen, send text, and handle events such as scrolling, cursor movement, and screen content changes.
You probably want something like Expect
use strict;
use warnings;
use Expect;
my $exp = Expect->spawn("telnet google.com 80");
$exp->expect(
    15,    # timeout in seconds
    [
        qr/^Escape character.*$/,
        sub {
            $exp->send("GET / HTTP/1.0\n\n");
            exp_continue;
        }
    ]
);
You're looking for Term::VT102, which emulates a VT102 terminal (converting the terminal control characters back into a virtual screen state). There's an example showing how to use it with Net::Telnet in VT102/examples/telnet-usage.pl (the examples directory is inside the VT102 directory for some reason).
It's been about 7 years since I used this (the system I was automating switched to a web-based interface), but it used to work.
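Roughly, the combination looks like the sketch below (untested, with placeholder host, credentials, and command; see the bundled example script for the full treatment):
use strict;
use warnings;
use Net::Telnet;
use Term::VT102;

# An 80x24 virtual screen that interprets the terminal control codes
my $vt = Term::VT102->new(cols => 80, rows => 24);

my $telnet = Net::Telnet->new(Timeout => 9999);
$telnet->open('host.example.com');    # placeholder host
$telnet->login('user', 'pass');       # placeholder credentials

# cmd() returns the output lines in list context
my @output = $telnet->cmd('command string');
$vt->process(join '', @output);       # update the virtual screen state

# Now read the rendered screen on a row/column basis
my $row10 = $vt->row_plaintext(10);   # plain text of row 10
print "Row 10: $row10\n";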
Or you could use the script command for this.
From the Solaris man-page:
DESCRIPTION
The script utility makes a record of everything printed on your screen. The record is written to filename. If no file name is given, the record is saved in the file typescript...
The script command forks and creates a sub-shell, according to the value of $SHELL, and records the text from this session. The script ends when the forked shell exits or when Control-d is typed.
I would also vote for the Expect answer. I had to do something similar from a GUI-ish application. The trick (albeit tedious) to getting around the control characters was to strip all the miscellaneous characters from the returned strings. It kind of depends on how messy the screen scrape ends up being.
Here is my function from that script as an example:
# Trim out the curses crap
sub trim {
    my @out = @_;
    for (@out) {
        s/\x1b7//g;                  # save-cursor escape
        s/\x1b8//g;                  # restore-cursor escape
        s/\x1b//g;                   # remove remaining escapes
        s/\W\w\W//g;
        s/\[\d\d;\d\dH//g;           # cursor-positioning sequences (e.g. [12;34H)
        s/\[\?25h//g;                # show-cursor sequence
        s/\[\?25l//g;                # hide-cursor sequence
        s/\[\dm//g;                  # character-attribute sequences
        s/qq//g;
        s/Recall//g;
        s/\357//g;                   # stray 0357 bytes
        s/[^0-9:a-zA-Z-\s,\"]/ /g;   # anything else becomes a space
        s/\s+/ /g;                   # collapse extra spaces
    }
    return wantarray ? @out : $out[0];
}