Perl - custom keystroke handlers

I'm trying to implement custom handlers for given keystrokes so that I can change mode while my script is fetching data from a file. How is that possible without a busy-wait while loop?
I was looking into Term::ReadKey, but I don't think it does what I need. Maybe I should combine it with something else, though I can't find any solution on Google.
I've just started with Perl scripting :)

Here is an example of how to avoid busy waiting while waiting for keyboard input:
use strict;
use warnings;
use IPC::Open2;
my $pid1 = run_cmd('read_key');
my $pid2 = run_cmd('counter');
print "Master: waiting for keyboard event..\n";
waitpid $pid1, 0;
print "Master: Done.\n";
kill 'TERM', $pid2;
sub run_cmd {
my ($cmd) = @_;
open(OUT, ">&STDOUT") or die "Could not duplicate STDOUT: $!\n";
open(IN, ">&STDIN") or die "Could not duplicate STDIN: $!\n";
my $pid = open2('>&OUT', '<&IN', $cmd);
return $pid;
}
where read_key is:
use strict;
use warnings;
use Term::ReadKey;
ReadMode 4;
END { ReadMode 0 }
my $key = ReadKey(0);
print "$key\n";
and counter is:
use strict;
use warnings;
$SIG{TERM} = sub { die "Child (counter): Caught a sigterm. Abort.\n" };
my $i = 0;
while (++$i) {
sleep 1;
print "$i\n";
}
Example output:
Name "main::IN" used only once: possible typo at ./p.pl line 19.
Name "main::OUT" used only once: possible typo at ./p.pl line 18.
Master: waiting for keyboard event..
1
2
3
q
Master: Done.
Child (counter): Caught a sigterm. Abort.

Related

Perl file handle with scalar target and IPC::Open3

My task is to start a program from Perl, keep control flow in the starting script, and capture the output of the program in scalar variables. The script should use only Perl modules available in the base distribution.
My first approach was
use POSIX;
use IPC::Open3;
use strict;
my ($invar, $outvar, $errvar, $in, $out, $err, $pid, $pidr);
open($in, "<",\$invar);
open($out, ">",\$outvar);
open($err, ">",\$errvar);
my $cmd = "sleep 5; echo Test; sleep 5; echo Test; sleep 5;";
$pid = open3($in, $out, $err, $cmd);
my $num = 0;
for($pidr = $pid; $pidr >= 0;)
{
sleep(1) if ($pidr = waitpid($pid, WNOHANG)) >= 0;
print "$num: $outvar" if $outvar;
++$num;
}
close($in);
close($out);
close($err);
When started, nothing happens. The output of the started program does not go into $outvar. To test whether my basic idea fails, I tried this:
my $outvar = "";
my $out;
open($out, ">", \$outvar);
print $out "Test\n";
print "--1--\n$outvar";
print $out "Test2\n";
print "--2--\n$outvar";
which correctly outputs as expected:
--1--
Test
--2--
Test
Test2
The question is: why does the above program not work like the test example and print the text that should be in $outvar?
A working solution is much more complex (when you add all the security checks left out for this example):
use POSIX;
use IPC::Open3;
use strict;
my ($invar, $outvar, $errvar, $in, $out, $err, $pid, $pidr);
open($in, "<",\$invar);
open($out, ">",\$outvar);
open($err, ">",\$errvar);
my $cmd = "sleep 5; echo Test; sleep 5; echo Test; sleep 5;";
$pid = open3($in, $out, $err, $cmd);
my $num = 0;
for($pidr = $pid; $pidr >= 0;)
{
sleep(1) if ($pidr = waitpid($pid, WNOHANG)) >= 0;
my $obits; vec($obits, fileno($out), 1) = 1;
if(select($obits, undef, undef, 0) > 0)
{
my $buffer;
sysread($out, $buffer, 10240);
print "$num: $buffer" if $buffer;
}
++$num;
}
close($in);
close($out);
close($err);
It correctly prints (as the first one should as well, in a similar way):
5: Test
10: Test
For the examples I removed much of the error handling and the similar code for STDERR.
open($out, ">", \$outvar); does not create a system file handle. You'll notice that fileno($out) is -1. One process can't write to another's memory, much less manipulate its variables, so the whole concept is flawed. Use IPC::Run3 or IPC::Run; they effectively implement the select loop for you. open3 is far too low-level for most applications.
The second snippet works because open3 is behaving as if $in, $out and $err are unopened handles. You might as well have done
$in = gensym();
$out = gensym();
$err = gensym();
But again, you should be using IPC::Run3 or IPC::Run. Your hand-rolled version suffers from some bugs, including one that can lead to a deadlock.
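For completeness, a minimal sketch of the IPC::Run3 approach mentioned above (IPC::Run3 is a CPAN module, not in the core distribution; the shell command here is just for demonstration):

```perl
use strict;
use warnings;
use IPC::Run3;   # CPAN module; implements the select()/waitpid() loop for you

my ($out, $err);

# \undef redirects the child's stdin from the null device; the child's
# stdout and stderr are collected into the referenced scalars.
run3 [ 'sh', '-c', 'echo Test; echo oops >&2' ], \undef, \$out, \$err;

print "stdout: $out";
print "stderr: $err";
```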

Executing a multiple indefinite perl script within another perl script

My intention is to execute the long.pl Perl script with a different path as an argument each time, but since long.pl has an indefinite loop, the main script never gets to the second path. I thought of using fork to do it, but I'm not sure whether it will solve my problem or not!
Some information on how to achieve this task would be helpful, and please let me know if you need any clarification on the problem statement.
#!/usr/bin/perl
use strict;
use warnings;
print localtime () . ": Hello from the parent ($$)!\n";
my @paths = ('C:\Users\goudarsh\Desktop\Perl_test_scripts','C:\Users\goudarsh\Desktop\Perl_test_scripts/rtl2gds');
foreach my $path (@paths){
my $pid = fork;
die "Fork failed: $!" unless defined $pid;
unless ($pid) {
print localtime () . ": Hello from the child ($$)!\n";
exec "long.pl $path"; # Some long running process.
die "Exec failed: $!\n";
}
}
long.pl
#!/usr/bin/perl
use strict;
use warnings;
while(1){
sleep 3;
#do some stuff here
}
Example run:
$ perl my_forker.pl
Done with other process.
Done with long running process.
Done with main process.
The following files must have executable permissions set:
long_running.pl:
#!/usr/bin/env perl
use strict;
use warnings;
use 5.020;
sleep 5;
say 'Done with long running process.';
other_process.pl:
#!/usr/bin/env perl
use strict;
use warnings;
use 5.020;
sleep 3;
say "Done with other process.";
my_forker.pl:
use strict;
use warnings;
use 5.020;
my @paths = (
'./long_running.pl',
'./other_process.pl',
);
my @pids;
for my $cmd (@paths) {
defined (my $pid = fork()) or die "Couldn't fork: $!";
if ($pid == 0) { #then in child process
exec $cmd;
die "Couldn't exec: $!"; #this line will cease to exist if exec() succeeds
}
else { #then in parent process, where $pid is the pid of the child
push @pids, $pid;
}
}
for my $pid (@pids) {
waitpid($pid, 0) #0 => block
}
say "Done with main process.";

Signal handling in perl

use strict;
use warnings;
print "hello\n";
sleep(10);
print "print before alarm\n";
alarm 5;
$SIG{'ALRM'} = \&alarm;
$SIG{'INT'} = \&alarm;
$SIG{'TERM'} = \&alarm;
sub alarm {
print "alarm signal handled\n";
}
I am not able to handle signals, either for alarm or for pressing Ctrl+C. I searched and found this as the way to do signal handling. What am I doing wrong?
First, set the handlers. Then, set the alarm. Last, do the time-consuming operation (sleep). You have it upside down:
#! /usr/bin/perl
use strict;
use warnings;
$SIG{'ALRM'} = \&alarm;
$SIG{'INT'} = \&alarm;
$SIG{'TERM'} = \&alarm;
print "hello\n";
alarm 5;
sleep(10);
print "after sleep\n";
sub alarm {
print "alarm signal handled\n";
}
Rather than duplicating the handler for each SIG, it's neater to use sigtrap.
This example prevents the user from killing the process, but you can do whatever you want in the handler. Of course, if your handler calls exit, you also have the END block available to do additional cleanup.
use strict;
use warnings;
use sigtrap 'handler' => \&sig_handler, qw(INT TERM KILL QUIT);
sub sig_handler {
my $sig_name = shift;
print "You can't kill me! $sig_name caught";
}
for (1 .. 10) {
print $_ . "\n";
sleep 1;
}
Alternatively you can use the existing signal lists...
use sigtrap 'handler' => \&sig_handler, qw(any normal-signals error-signals stack-trace);
see perldoc sigtrap for more info
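As a self-contained sketch of the sigtrap approach, this sends itself a signal so the handler fires without needing a human at the keyboard (the handler name and the self-sent SIGTERM are just for demonstration):

```perl
use strict;
use warnings;
use sigtrap 'handler' => \&sig_handler, qw(INT TERM);

my $caught;

# sigtrap passes the signal name to the handler, so one sub can
# service several signals. If the handler called exit(), any END
# blocks would still run, which makes them a good place for cleanup.
sub sig_handler {
    my $sig_name = shift;
    $caught = $sig_name;
    print "caught SIG$sig_name\n";
}

kill 'TERM', $$;   # send ourselves a SIGTERM to demonstrate
sleep 1;           # give the (safe) signal a chance to be dispatched
print "still running after SIG$caught\n";
```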

Process hanging -SIGALRM not delivered- Perl

I have a command that I'm executing using open with a pipe, and I want to set a timeout of 10 seconds and have the subprocess aborted if the execution time exceeds this. However, my code just causes the program to hang. Why is my alarm not getting delivered properly?
my $pid = 0;
my $cmd = "someCommand";
print "Running Command # $num";
eval {
local $SIG{ALRM} = sub {
print "alarm \n";
kill 9, $pid;
};
alarm 10;
$pid = open(my $fh, "$cmd|");
alarm 0;
};
if($@) {
die unless $@ eq "alarm \n";
} else {
print $_ while(<$fh>);
}
EDIT:
So, from the answers below, this is what I have:
my $pid = open(my $fh, qq(perl -e 'alarm 10; exec \@ARGV; die "exec: $!\n" ' $cmd |));
print $_ while(<$fh>);
But this prints "Alarm clock" to the console when the alarm times out... whereas I don't specify this anywhere in the code. How can I get rid of that, and where would I put the custom alarm handler?
Thanks!
I want to set a timeout of 10 seconds and have the sub process aborted if the execution time exceeds this
A different approach is to set the alarm on the subprocess itself, with a handy scripting language you already have:
my $cmd = "someCommand";
my $pid = open(my $child_stdout, '-|',
'perl', '-e', 'alarm 10; exec @ARGV; die "exec: $!"', $cmd);
...
Your child process will initially be perl (well, the shell and then perl), which will set an alarm on itself and then exec (replace itself with) $someCommand. Pending alarms, however, are inherited across exec()s.
All your code is doing is setting a 10 second timeout on the open call, not on the whole external program. You want to bring the rest of your interaction with the external command into the eval block:
eval {
local $SIG{ALRM} = sub {
print "alarm \n";
kill 9, $pid;
};
alarm 10;
$pid = open(my $fh, "$cmd|");
print while <$fh>;
close $fh;
alarm 0;
};
if($@) {
die unless $@ eq "alarm \n";
}

Read and Write in the same file with different process

I have written two programs. One program writes content to a text file continuously; the other program reads that content at the same time.
Both programs should run at the same time. For me, the writing program works correctly, but the other program does not read the file.
I know that only once the write process is complete will the data be stored on the hard disk, and only then can another process read it.
But I want to read and write at the same time, with different processes, on a single file. How can I do that?
Please help me.
The following code writes the content to the file:
sub generate_random_string
{
my $length_of_randomstring=shift;# the length of
# the random string to generate
my @chars=('a'..'z','A'..'Z','0'..'9','_');
my $random_string;
foreach (1..$length_of_randomstring)
{
# rand @chars will generate a random
# number between 0 and scalar @chars
$random_string.=$chars[rand @chars];
}
return $random_string;
}
#Generate the random string
open (FH,">>file.txt")or die "Can't Open";
while(1)
{
my $random_string=&generate_random_string(20);
sleep(1);
#print $random_string."\n";
print FH $random_string."\n";
}
The following code reads the content. This is another process:
open (FH,"<file.txt") or die "Can't Open";
print "Open the file Successfully\n\n";
while(<FH>)
{
print "$_\n";
}
You might use an elaborate cooperation protocol such as in the following. Both ends, reader and writer, use common code in the TakeTurns module that handles fussy details such as locking and where the lock file lives. The clients need only specify what they want to do when they have exclusive access to the file.
reader
#! /usr/bin/perl
use warnings;
use strict;
use TakeTurns;
my $runs = 0;
reader "file.txt" =>
sub {
my($fh) = @_;
my @lines = <$fh>;
print map "got: $_", @lines;
++$runs <= 10;
};
writer
#! /usr/bin/perl
use warnings;
use strict;
use TakeTurns;
my $n = 10;
my @chars = ('a'..'z','A'..'Z','0'..'9','_');
writer "file.txt" =>
sub { my($fh) = @_;
print $fh join("" => map $chars[rand @chars], 1..$n), "\n"
or warn "$0: print: $!";
};
The TakeTurns module is execute-around at work:
package TakeTurns;
use warnings;
use strict;
use Exporter 'import';
use Fcntl qw/ :DEFAULT :flock /;
our @EXPORT = qw/ reader writer /;
my $LOCKFILE = "/tmp/taketurns.lock";
sub _loop ($&) {
my($path,$action) = @_;
while (1) {
sysopen my $lock, $LOCKFILE, O_RDWR|O_CREAT
or die "sysopen: $!";
flock $lock, LOCK_EX or die "flock: $!";
my $continue = $action->();
close $lock or die "close: $!";
return unless $continue;
sleep 0;
}
}
sub writer {
my($path,$w) = @_;
_loop $path =>
sub {
open my $fh, ">", $path or die "open $path: $!";
my $continue = $w->($fh);
close $fh or die "close $path: $!";
$continue;
};
}
sub reader {
my($path,$r) = @_;
_loop $path =>
sub {
open my $fh, "<", $path or die "open $path: $!";
my $continue = $r->($fh);
close $fh or die "close $path: $!";
$continue;
};
}
1;
Sample output:
got: 1Upem0iSfY
got: qAALqegWS5
got: 88RayL3XZw
got: NRB7POLdu6
got: IfqC8XeWN6
got: mgeA6sNEpY
got: 2TeiF5sDqy
got: S2ksYEkXsJ
got: zToPYkGPJ5
got: 6VXu6ut1Tq
got: ex0wYvp9Y8
Even though you went to so much trouble, there are still issues. The protocol is unreliable, so reader has no guarantee of seeing all messages that writer sends. With no writer active, reader is content to read the same message over and over.
You could add all this, but a more sensible approach would be using abstractions the operating system provides already.
For example, Unix named pipes seem to be a pretty close match to what you want, and note how simple the code is:
pread
#! /usr/bin/perl
use warnings;
use strict;
my $pipe = "/tmp/mypipe";
system "mknod $pipe p 2>/dev/null";
open my $fh, "<", $pipe or die "$0: open $pipe: $!";
while (<$fh>) {
print "got: $_";
sleep 0;
}
pwrite
#! /usr/bin/perl
use warnings;
use strict;
my $pipe = "/tmp/mypipe";
system "mknod $pipe p 2>/dev/null";
open my $fh, ">", $pipe or die "$0: open $pipe: $!";
my $n = 10;
my @chars = ('a'..'z','A'..'Z','0'..'9','_');
while (1) {
print $fh join("" => map $chars[rand @chars], 1..$n), "\n"
or warn "$0: print: $!";
}
Both ends attempt to create the pipe using mknod because they have no other method of synchronization. At least one will fail, but we don't care as long as the pipe exists.
As you can see, all the waiting machinery is handled by the system, so you do what you care about: reading and writing messages.
This works.
The writer:
use IO::File ();
sub generate_random_string {...}; # same as above
my $file_name = 'file.txt';
my $handle = IO::File->new($file_name, 'a');
die "Could not append to $file_name: $!" unless $handle;
$handle->autoflush(1);
while (1) {
$handle->say(generate_random_string(20));
}
The reader:
use IO::File qw();
my $file_name = 'file.txt';
my $handle = IO::File->new($file_name, 'r');
die "Could not read $file_name: $!" unless $handle;
STDOUT->autoflush(1);
while (defined (my $line = $handle->getline)) {
STDOUT->print($line);
}
Are you on Windows or *nix? You might be able to string something together on *nix by using tail to get the output as it is written to the file. On Windows you can call CreateFile() with FILE_SHARE_READ and/or FILE_SHARE_WRITE in order to allow others to access the file while you have it opened for read/write. You may have to periodically check whether the file size has changed in order to know when to read (I'm not 100% certain here).
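On *nix, the tail-style approach can also be sketched in plain Perl by clearing the handle's EOF state with seek. A minimal sketch, assuming a line-oriented writer; the demo writer below truncates file.txt in the current directory, and the three-poll limit exists only so the demo terminates:

```perl
use strict;
use warnings;

# Demo writer: put one line in the file up front (truncates file.txt!).
open my $log, '>', 'file.txt' or die "open file.txt: $!";
print $log "first line\n";
close $log or die "close: $!";

# tail(1)-style reader: on EOF, seek 0 bytes from the current position,
# which clears the handle's EOF flag so <$fh> will try reading again.
open my $fh, '<', 'file.txt' or die "open file.txt: $!";
my @seen;
my $empty_polls = 0;
while ($empty_polls < 3) {
    my $got = 0;
    while (my $line = <$fh>) {
        print "got: $line";
        push @seen, $line;
        $got = 1;
    }
    $empty_polls = $got ? 0 : $empty_polls + 1;
    sleep 1;
    seek $fh, 0, 1;   # SEEK_CUR, offset 0: resets EOF, keeps position
}
```

In a real program the outer loop would run forever, picking up each line as the other process appends it.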
Another option is a memory-mapped file.