Display contents in the GUI using Tcl - Perl

I am new to GUIs and I was trying to create a simple GUI in Tcl. It has a push button which, when pressed, runs some code and generates an output '.l' file in the directory. But I want the output to be printed in the GUI itself. So how am I supposed to change this code to do that?
proc makeTop { } {
    toplevel .top    ;# Make the window
    # Put things in it
    label .top.lab -text "This is output Window" -font "ansi 12 bold"
    text .top.txt
    .top.txt insert end "XXX.l"
    # An option to close the window.
    button .top.but -text "Close" -command { destroy .top }
    # Pack everything
    pack .top.lab .top.txt .top.but
}
label .lab -text "This is perl" -font "ansi 12 bold"
button .but -text "run perl" -command { exec perl run_me }
pack .lab .but
Can anyone help me display the contents of the output file XXX.l in the GUI itself?

For simple programs that just print their results to stdout, it's simple: exec returns all the standard output of the program. So you just need to use the return value of your exec call:
proc exec_and_print {args} {
    .top.txt insert end [exec {*}$args]
}
But remember, exec only returns after the program has exited. For long running programs where you want the output to appear immediately in your text box, you can use open. If the first character of the file name passed to open is |, then open assumes that the string is a command line to be executed. With open you get an I/O channel that you can continuously read from:
proc long_running_exec {args} {
    set chan [open "| $args"]
    # disable blocking to prevent read from freezing our UI:
    fconfigure $chan -blocking 0
    # use fileevent to run a handler only when data is available:
    fileevent $chan readable [list append_output $chan]
}

proc append_output {chan} {
    .top.txt insert end [read $chan]
    # remember to clean up after ourselves once the program exits:
    if {[eof $chan]} {
        close $chan
    }
}
The long_running_exec function above returns immediately and uses events to read the output. This allows your GUI to continue functioning instead of freezing while the external program runs. To use it simply do:
button .but -text "run perl" -command { long_running_exec perl run_me }
Additional answer:
If the program generates a file as output and you simply want to display the contents of that file, then just read the file:
proc exec_and_print {args} {
    exec {*}$args
    set f [open output_file]
    .top.txt insert end [read $f]
    close $f
}
If you know where the file is generated but don't know the exact file name then read the manual for glob on how to get a list of directory contents.

Related

Enable vi command line editing from .perldb rc afterinit in the Perl debugger

Whenever I start the Perl debugger from a script with the -d option, the session starts with emacs command line editing. I then type ESC Ctrl-J to enable vi editing. I want to enable vi from the get-go.
I tried using the following .perldb:
&parse_options("HistFile=.perlDebugHist");
sub afterinit { push @DB::typeahead, ("o inhibit_exit", chr(27).chr(10)) }
but when the session starts, it says
auto(-2) DB<62> o inhibit_exit
inhibit_exit = '1'
auto(-1) DB<63>
Unrecognized character \x1B; marked by <-- HERE after :db_stop;
<-- HERE near column 96 at (eval 9)[/usr/share/perl/5.22/perl5db.pl:737] line 2.
at (eval 9)[/usr/share/perl/5.22/perl5db.pl:737] line 2.
eval 'no strict; ($@, $!, $^E, $,, $/, $\\, $^W) = @DB::saved;package main; $^D = $^D | $DB::db_stop;
;
' called at /usr/share/perl/5.22/perl5db.pl line 737
DB::eval called at /usr/share/perl/5.22/perl5db.pl line 3110
DB::DB called at ~/bin/debug.pl line 61
Here is a possible workaround that assumes you use the GNU readline library:
Create a file called perldb_inputrc in the current directory with content:
set editing-mode vi
Then change the afterinit() sub to:
sub afterinit {
    if (!$DB::term) {
        DB::setterm();
    }
    $DB::term->read_init_file('perldb_inputrc');
    push @DB::typeahead, "o inhibit_exit";
}
See perldoc perl5db for more information.
Update:
A simpler approach is to use a readline init file. You can use the global file ~/.inputrc, or use a local one for the current debugging session only by setting the environment variable INPUTRC. For example, using the above perldb_inputrc file, you could put the following in your .perldb init file:
sub afterinit { push @DB::typeahead, "o inhibit_exit" }
and then run the Perl script like this:
INPUTRC=./perldb_inputrc perl -d myscript.pl

How to force a Perl script to quit when it is running in loops?

I use Perl to analyze my research data (multiple large files which may be edited or modified by users while the program is running).
In my program, there are scripts to check whether a file is complete or not before it analyzes the data in one of the files. This check is processed in multiple loops. If I simply use "exit", it only exits a loop. How can I force the script to quit and generate an error message for the user before it quits? In my program, there is a defined variable which is output to a log file at the end of the program. I do NOT want to use a GOTO command. Any further information is highly appreciated.
......
foreach $dir (@dirs)
{
    ...
    $file="$dir$filename";
    $file_size=`wc -l $file`;
    $line=`grep -n TIMESTEP $file`;
    # read the first line no of each frame in a data file
    @values=split(/\r?\n/,$line);
    $loop_i=0;
    $tmp=0; # save line no for the first frame
    foreach $sub_line (@values)
    {
        @sub_values=split(/:/,$sub_line);
        $line_no[$loop_i]=$sub_values[0];
        # check whether the line number in each frame is the same; if not, quit
        if($loop_i==1){$tmp=$line_no[$loop_i]-$line_no[$loop_i-1];}
        elsif($loop_i>1)
        {
            $_=$line_no[$loop_i]-$line_no[$loop_i-1];
            if($_ != $tmp)
            {
                $flag=0; $err_message="$err_message; incomplete data (each frame has different line number)";
                exit; # cannot quit the whole program
            }
        }
        else{;}
        $loop_i++;
    } # end foreach $sub_line (@values)
    .....
} # end foreach $dir (@dirs)
....
I think what you want are loop controls. You can use next, last, or redo to break out of a loop early, stop a loop completely, or process the same iteration again. With nested loops you can use a label to specify which loop you want to control:
DIR: foreach my $dir ( ... ) {
    ...
    LINE: foreach my $line ( ... ) {
        next LINE if $skip_line;
        last LINE if ...;
        next DIR if ...;
    }
}
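Applied to the snippet in the question, something along these lines records the error and moves on without killing the program. This is only a sketch: $len stands in for the frame-length computation, and the other variables mirror the ones in the question.
DIR: foreach my $dir (@dirs) {
    # ... read the TIMESTEP line numbers for this directory into @values ...
    LINE: foreach my $sub_line (@values) {
        # ... compute the current frame length into $len ...
        if ($loop_i > 1 && $len != $tmp) {
            $flag = 0;
            $err_message .= "; incomplete data (each frame has a different line count)";
            next DIR;      # give up on this directory and continue with the next one
            # or: last DIR; # abandon the whole scan; the log-writing code below still runs
        }
        $loop_i++;
    }
}
# ... the code that writes $err_message to the log file runs here as usual ...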

How to determine if shell command didn't run or produced no output

I am executing some shell commands via a Perl script and capturing the output, like this:
$commandOutput = `cat /path/to/file | grep "some text"`;
I also check whether the command ran successfully, like this:
if(!$commandOutput)
{
    # command not run!
}
else
{
    # further processing
}
This usually works and I get the output correctly. The problem is that, in some cases, the command itself does not produce any output. For instance, sometimes the text I am trying to grep for will not be present in the target file, so no output is produced. In that case, my script treats it as "command not run", which is not true.
What is the correct way to differentiate between these two cases in Perl?
You can use this to tell whether the command failed or simply returned nothing:
$val = `cat text.txt | grep -o '[0-9]*'`;
print "command failed" if ($?);
print "empty string" if (!length($val));
print "val = $val";
Assume that text.txt contains "123ab", from which you want to extract the number only.
Use $? to check if the command executed successfully: see backticks do not return any value in perl for an example.
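For instance, a minimal sketch of that check, using the command from the question (the file path and pattern are placeholders):
$commandOutput = `cat /path/to/file | grep "some text"`;
my $status = $?;                  # save it right away, before anything else resets it
if ($status != 0) {
    # non-zero status: grep found nothing, or the command could not run at all
    warn "command exited with status ", $status >> 8, "\n";
}
elsif (!length $commandOutput) {
    warn "command ran fine but produced no output\n";
}
else {
    # further processing
}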
If you're not piping through cat and instead run grep directly, you can check $? for a more specific exit status:
my $commandOutput = `grep "some text" /path/to/file`;

if ($? < 0)
{
    # command not run!
}
elsif ($? >> 8 > 1)
{
    # file not found
}
else
{
    # further processing
}

Capture the output of Perl's 'system()'

I need to run a shell command with system() in Perl. For example,
system('ls')
The system call will print to STDOUT, but I want to capture the output into a variable so that I can do future processing with my Perl code.
That's what backticks are for. From perldoc perlfaq8:
Why can't I get the output of a command with system()?
You're confusing the purpose of system() and backticks (``). system()
runs a command and returns exit status information (as a 16 bit value:
the low 7 bits are the signal the process died from, if any, and the
high 8 bits are the actual exit value). Backticks (``) run a command
and return what it sent to STDOUT.
my $exit_status = system("mail-users");
my $output_string = `ls`;
See perldoc perlop for more details.
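For example, a small sketch (not taken from perlfaq8) that captures the output with backticks and then unpacks $? exactly as described above:
my $output = `ls`;          # backticks capture the command's STDOUT
my $raw    = $?;            # 16-bit status word, same layout as system()'s return value
my $exit   = $raw >> 8;     # high 8 bits: the actual exit value
my $signal = $raw & 127;    # low 7 bits: the signal the process died from, if any
print "exit status: $exit, signal: $signal\n";
print $output;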
IPC::Run is my favourite module for this kind of task. Very powerful and flexible, and also trivially simple for small cases.
use IPC::Run 'run';
run [ "command", "arguments", "here" ], ">", \my $stdout;
# Now $stdout contains output
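IPC::Run can also capture STDERR separately and tell you whether the command succeeded; here is a brief sketch (the command and path are placeholders only):
use IPC::Run 'run';

run [ "ls", "-l", "/tmp" ], ">", \my $stdout, "2>", \my $stderr
    or warn "command failed, exit status ", $? >> 8, "\n";
print "STDOUT:\n$stdout";
print "STDERR:\n$stderr" if length $stderr;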
Simply use backticks, similar to what you would do in Bash:
$variable=`some_command some args`;
That's all. Note that you will not see anything printed to STDOUT, because the output is redirected into the variable.
This example is unusable for a command that interacts with the user, unless you have prepared answers. For that, you can use something like this, chaining shell commands:
$variable=`cat answers.txt|some_command some args`;
Inside the answers.txt file you should prepare all answers for some_command to work properly.
I know this isn't the best programming practice :) but it is the simplest way to achieve the goal, especially for Bash programmers.
Of course, if the output is larger (ls on a directory tree, say), you shouldn't read all of the output at once. Read from the command the same way you read from a regular file:
open CMD, '-|', 'your_command some args' or die $!;
my $line;
while (defined($line = <CMD>)) {
    print $line;    # or push @table, $line, or do whatever processing you want, line by line
}
close CMD;
An additional, extended solution for processing long command output without an extra shell call:
my @CommandCall = qw(find / -type d);   # some example single command
my $commandSTDOUT;                      # file handle
my $pid = open($commandSTDOUT, '-|');   # there will be an implicit fork!
if ($pid) {
    # parent side
    my $singleLine;
    while (defined($singleLine = <$commandSTDOUT>)) {
        chomp $singleLine;              # typically we don't need the EOL
        do_some_processing_with($singleLine);
    }
    close $commandSTDOUT;               # at this point $? is set, so we can capture it
    my $exitcode = $? >> 8;
    do_something_with_exit_code($exitcode);
} else {
    # child side: this is where the command actually runs
    open STDERR, '>&', \*STDOUT;        # redirect stderr to stdout if needed; only affects the child because of the fork
    exec(@CommandCall);                 # the child process is replaced by the external command
    die "Cannot call @CommandCall";     # only reached if exec fails
}
If you use a procedure like that, you will capture all of the command's output, and you can process it line by line. Good luck :)
I wanted to run system() instead of backticks because I wanted to see the output of rsync --progress. However, I also wanted to capture the output in case something goes wrong depending on the return value. (This is for a backup script). This is what I am using now:
use File::Temp qw(tempfile);
use Term::ANSIColor qw(colored colorstrip);
sub mysystem {
    my $cmd = shift; # "rsync -avz --progress -h $fullfile $copyfile";
    my ($fh, $filename) = tempfile();
    # http://stackoverflow.com/a/6872163/2923406
    # I want to have rsync progress output on the terminal AND capture it in case of error.
    # Need to use pipefail because 'tee' would be the last cmd otherwise and hence $? would be wrong.
    my @cmd = ("bash", "-c", "set -o pipefail && $cmd 2>&1 | tee $filename");
    my $ret = system(@cmd);
    my $outerr = join('', <$fh>);
    if ($ret != 0) {
        logit(colored("ERROR: Could not execute command: $cmd", "red"));
        logit(colored("ERROR: stdout+stderr = $outerr", "red"));
        logit(colored("ERROR: \$? = $?, \$! = $!", "red"));
    }
    close $fh;
    unlink($filename);
    return $ret;
}
# And logit() is something like:
sub logit {
    my $s = shift;
    my ($logsec, $logmin, $loghour, $logmday, $logmon, $logyear, $logwday, $logyday, $logisdst) = localtime(time);
    $logyear += 1900;
    my $logtimestamp = sprintf("%4d-%02d-%02d %02d:%02d:%02d", $logyear, $logmon+1, $logmday, $loghour, $logmin, $logsec);
    my $msg = "$logtimestamp $s\n";
    print $msg;
    open LOG, ">>$LOGFILE";
    print LOG colorstrip($msg);
    close LOG;
}

How can Perl's system() print the command that it's running?

In Perl, you can execute system commands using system() or `` (backticks). You can even capture the output of the command into a variable. However, this hides the program execution in the background so that the person executing your script can't see it.
Normally this is useful, but sometimes I want to see what is going on behind the scenes. How do you make it so that the commands being executed are printed to the terminal, and those programs' output is printed to the terminal as well? This would be the .bat equivalent of "@echo on".
I don't know of any default way to do this, but you can define a subroutine to do it for you:
sub execute {
    my $cmd = shift;
    print "$cmd\n";
    system($cmd);
}

my $cmd = $ARGV[0];
execute($cmd);
And then see it in action:
pbook:~/foo rudd$ perl foo.pl ls
ls
file1 file2 foo.pl
As I understand it, system() will print the result of the command, but not assign its output. E.g.:
[daniel@tux /]$ perl -e '$ls = system("ls"); print "Result: $ls\n"'
bin dev home lost+found misc net proc sbin srv System tools var
boot etc lib media mnt opt root selinux sys tmp usr
Result: 0
Backticks will capture the output of the command and not print it:
[daniel@tux /]$ perl -e '$ls = `ls`; print "Result: $ls\n"'
Result: bin
boot
dev
etc
home
lib
etc...
Update: If you want to print the name of the command being system()'d as well, I think Rudd's approach is good. Repeated here for consolidation:
sub execute {
    my $cmd = shift;
    print "$cmd\n";
    system($cmd);
}

my $cmd = $ARGV[0];
execute($cmd);
Use open instead. Then you can capture the output of the command.
open(LS, "ls |");
print <LS>;
Here's an updated execute that will print the results and return them:
sub execute {
    my $cmd = shift;
    print "$cmd\n";
    my $ret = `$cmd`;
    print $ret;
    return $ret;
}
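A small variation (a sketch, not part of the original answer) also reports the command's exit status, so callers can tell success from failure:
sub execute_checked {
    my $cmd = shift;
    print "$cmd\n";             # echo the command first
    my $output = `$cmd`;        # capture STDOUT
    print $output;
    my $status = $? >> 8;       # the external command's exit code
    warn "'$cmd' exited with status $status\n" if $status != 0;
    return wantarray ? ($output, $status) : $output;
}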
Hmm, it's interesting how different people are answering this in different ways. It looks to me like mk and Daniel Fone interpreted it as wanting to see/manipulate the stdout of the command (neither of their solutions captures stderr, fwiw). I think Rudd got closer. One twist you could make on Rudd's response is to overwrite the built-in system() command with your own version so that you wouldn't have to rewrite existing code to use his execute() command.
Using the execute() sub from Rudd's post, you could have something like this at the top of your code:
if ($DEBUG) {
    *{"CORE::GLOBAL::system"} = \&{"main::execute"};
}
I think that will work but I have to admit this is voodoo and it's been a while since I wrote this code. Here's the code I wrote years ago to intercept system calls on a local (calling namespace) or global level at module load time:
# importing into either the calling or global namespace _must_ be
# done from import(). Doing it elsewhere will not have desired results.
delete($opts{handle_system});
if ($do_system) {
    if ($do_system eq 'local') {
        *{"$callpkg\::system"} = \&{"$_package\::system"};
    } else {
        *{"CORE::GLOBAL::system"} = \&{"$_package\::system"};
    }
}
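For reference, here is a minimal self-contained sketch of the same trick (not the original module): install the override in a BEGIN block so it is in place before any calls to system() are compiled, and fall back to the real builtin via the CORE:: prefix.
BEGIN {
    # echo every command before running it, like "@echo on" in a batch file
    *CORE::GLOBAL::system = sub {
        print "Running: @_\n";
        return CORE::system(@_);   # CORE:: always refers to the builtin
    };
}

system("ls", "-l");   # prints "Running: ls -l" and then runs the command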
Another technique to combine with the others mentioned in the answers is to use the tee command. For example:
open(F, "ls | tee /dev/tty |");
while (<F>) {
    print length($_), "\n";
}
close(F);
This will both print out the files in the current directory (as a consequence of tee /dev/tty) and also print out the length of each filename read.