Executing multiple indefinite Perl scripts within another Perl script - perl

My intention is to execute the long.pl Perl script with a different path as an argument each time. Since long.pl contains an indefinite loop, the main script never gets to the second path. I thought of using fork for this, but I'm not sure whether it will solve my problem or not!
Some information on how to achieve this would be helpful, and please let me know if you need any clarification on the problem statement.
#!/usr/bin/perl
use strict;
use warnings;
print localtime () . ": Hello from the parent ($$)!\n";
my @paths = ('C:\Users\goudarsh\Desktop\Perl_test_scripts', 'C:\Users\goudarsh\Desktop\Perl_test_scripts/rtl2gds');
foreach my $path (@paths) {
    my $pid = fork;
    die "Fork failed: $!" unless defined $pid;
    unless ($pid) {
        print localtime() . ": Hello from the child ($$)!\n";
        exec "long.pl $path"; # Some long running process.
        die "Exec failed: $!\n";
    }
}
long.pl:
#!/usr/bin/perl
use strict;
use warnings;
while (1) {
    sleep 3;
    # do some stuff here
}

Example run:
$ perl my_forker.pl
Done with other process.
Done with long running process.
Done with main process.
The following files must have executable permissions set:
long_running.pl:
#!/usr/bin/env perl
use strict;
use warnings;
use 5.020;
sleep 5;
say 'Done with long running process.';
other_process.pl:
#!/usr/bin/env perl
use strict;
use warnings;
use 5.020;
sleep 3;
say "Done with other process."
my_forker.pl:
use strict;
use warnings;
use 5.020;
my @paths = (
    './long_running.pl',
    './other_process.pl',
);
my @pids;
for my $cmd (@paths) {
    defined(my $pid = fork()) or die "Couldn't fork: $!";
    if ($pid == 0) { # then in child process
        exec $cmd;
        die "Couldn't exec: $!"; # this line will cease to exist if exec() succeeds
    }
    else { # then in parent process, where $pid is the pid of the child
        push @pids, $pid;
    }
}
for my $pid (@pids) {
    waitpid($pid, 0); # 0 => block
}
say "Done with main process.";


Trouble with IPC::Open3

I am writing a simple script using IPC::Open3. The script produces no output to either stdout or stderr, while I would expect output to both.
The complete source code:
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use IPC::Open3;
pipe my $input, my $output or die $!;
my $pid = open3(\*STDIN, $output, \*STDERR, 'dd', 'if=/usr/include/unistd.h') or die $!;
while (<$input>) {
    print $_."\n";
}
waitpid $pid, 0;
I am fairly certain that I am using IPC::Open3 incorrectly. However, I am still confused as to what I should be doing.
It's the pipe. Without knowing why it's there I can't say more. This works fine.
my $reader;
my $pid = open3(\*STDIN, $reader, \*STDERR, 'dd', 'if=/usr/include/unistd.h') or die $!;
while (<$reader>) {
    print $_."\n";
}
waitpid $pid, 0;
I realize it's probably just an example, but in case it's not... this is complete overkill for what you're doing. You can accomplish that with backticks.
print `dd if=/usr/include/unistd.h`;
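For instance, a minimal sketch that captures the output into a variable and checks the exit status afterwards ($listing is just an illustrative name):
use strict;
use warnings;
# Backticks capture the command's STDOUT; STDERR still goes to the terminal.
my $listing = `dd if=/usr/include/unistd.h 2>/dev/null`;
die "dd failed with exit status " . ($? >> 8) if $?;
print $listing;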
IPC::Open3 is a bit overcomplicated. There are better modules such as IPC::Run and IPC::Run3.
use strict;
use warnings;
use IPC::Run3;
run3(['perl', '-e', 'warn "Warning!"; print "Normal\n";'],
\*STDIN, \*STDOUT, \*STDERR
);
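For comparison, here is a minimal IPC::Run sketch of the same idea, assuming IPC::Run is installed; the uppercasing one-liner is just a stand-in child command:
use strict;
use warnings;
use IPC::Run qw(run);
# Feed a string to the child's STDIN and capture its STDOUT and STDERR.
my $in = "hello\nworld\n";
my ($out, $err) = ('', '');
run ['perl', '-ne', 'print uc'], \$in, \$out, \$err
    or die "child exited non-zero: $?";
print "child stdout:\n$out";
print "child stderr:\n$err";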
Your program suffers from the following problems:
\*STDIN (open STDIN as a pipe tied to the child's STDIN) should be <&STDIN (use the parent's STDIN as the child's STDIN).
\*STDERR (open STDERR as a pipe tied to the child's STDERR) should be >&STDERR (use the parent's STDERR as the child's STDERR).
The value you place in $output is being ignored and overwritten. Fortunately, it's being overwritten with a correct value!
You use print $_."\n";, but $_ is already newline-terminated. Either chomp first, or don't add a newline.
open3 isn't a system call, so it doesn't set $!.
open3 doesn't return false on error; it throws an exception.
So we get something like:
#!/usr/bin/env perl
use strict;
use warnings;
use feature qw( say );
use IPC::Open3;
my $pid = open3('<&STDIN', my $output, '>&STDERR',
    'dd', 'if=/usr/include/unistd.h');
while (<$output>) {
    chomp;
    say "<$_>";
}
waitpid($pid, 0);

Perl - custom keystroke handlers

I'm trying to implement custom handlers for given keystrokes so that I can change mode while my script is fetching data from a file. How is that possible without any while loop?
I was looking into Term::ReadKey, but I don't think it does what I need. Maybe I should combine it with something else, though I can't find any solution on Google.
I've just started with Perl scripting :)
Here is an example of how to avoid busy waiting when waiting for a keyboard input:
use strict;
use warnings;
use IPC::Open2;
my $pid1 = run_cmd('read_key');
my $pid2 = run_cmd('counter');
print "Master: waiting for keyboard event..\n";
waitpid $pid1, 0;
print "Master: Done.\n";
kill 'TERM', $pid2;
sub run_cmd {
    my ($cmd) = @_;
    open(OUT, ">&STDOUT") or die "Could not duplicate STDOUT: $!\n";
    open(IN, ">&STDIN") or die "Could not duplicate STDIN: $!\n";
    my $pid = open2('>&OUT', '<&IN', $cmd);
    return $pid;
}
where read_key is:
use strict;
use warnings;
use Term::ReadKey;
ReadMode 4;
END { ReadMode 0 }
my $key = ReadKey(0);
print "$key\n";
and counter is:
use strict;
use warnings;
$SIG{TERM} = sub { die "Child (counter): Caught a sigterm. Abort.\n" };
my $i = 0;
while (++$i) {
    sleep 1;
    print "$i\n";
}
Example output:
Name "main::IN" used only once: possible typo at ./p.pl line 19.
Name "main::OUT" used only once: possible typo at ./p.pl line 18.
Master: waiting for keyboard event..
1
2
3
q
Master: Done.
Child (counter): Caught a sigterm. Abort.

perl background process

I am trying to run a background process in Perl. I create a child process, which is used to call another Perl script. I want to run a few lines of code in parallel with this child process, and after the child process is done, I want to print a line of text.
Main script
#!/usr/bin/perl
$|=1;
print "before the child process\n";
my $pid = fork();
if (defined $pid)
{
    system("perl testing.pl");
}
print "before wait command\n";
wait();
print "after 20 secs of waiting\n";
testing.pl
#!/usr/bin/perl
print "inside testing\n";
sleep(20);
Expected output
before the child process
before wait command
(should wait for 20 secs and then print)
after 20 secs of waiting
There are many problems with your script. Always:
use strict;
use warnings;
Localising special variables (such as $|) is good practice. Only the special value undef returns false for defined; every other value (even the 0 that fork returns in the child, which is the case here) returns true for defined, so both the parent and the child end up running system("perl testing.pl"). In the other script, the shebang is wrong.
#!/usr/bin/perl
use strict;
use warnings;
local $| = 1;
print "Before the child process\n";
unless (fork) {
    system("perl testing.pl");
    exit;
}
print "Before wait command\n";
wait;
print "After 20 secs of waiting\n";
The “Background Processes” section of the perlipc documentation reads
You can run a command in the background with:
system("cmd &");
The command’s STDOUT and STDERR (and possibly STDIN, depending on your shell) will be the same as the parent’s. You won't need to catch SIGCHLD because of the double-fork taking place; see below for details.
Adding an ampersand to the argument to system in your program can vastly simplify your main program.
#! /usr/bin/env perl
print "before the child process\n";
system("perl testing.pl &") == 0
or die "$0: perl exited " . ($? >> 8);
print "before wait command\n";
wait;
die "$0: wait: $!" if $? == -1;
print "after 20 secs of waiting\n";
fork return value handling is a bit tricky, indeed.
A recent article by Aristotle features a nice and concise forking idiom which, in your case, looks like this:
#!/usr/bin/env perl
use 5.010000;
use strict;
use warnings qw(all);
say 'before the child process';
given (fork) {
    when (undef) { die "couldn't fork: $!" }
    when (0) {
        exec $^X => 'testing.pl';
    }
    default {
        my $pid = $_;
        say 'before wait command';
        waitpid $pid, 0;
        say 'after 20 secs of waiting';
    }
}
Pay attention to the exec $^X => '...' line: the $^X variable holds the full path to the currently running Perl executable, so the "right" Perl version is guaranteed for the child. Also, a system call is pointless when you are already forking yourself; exec replaces the child process directly.
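For instance, a minimal sketch showing $^X in action; the one-liner child is purely illustrative:
#!/usr/bin/env perl
use strict;
use warnings;
# $^X is the path of the perl binary running this script, so re-invoking it
# guarantees the child uses the same interpreter as the parent.
print "running under: $^X\n";
system($^X, '-e', 'print "child ran under $^X\n"') == 0
    or die "could not re-run $^X: $?";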

How to read to and write from a pipe in Perl?

I need to modify an existing Perl program. I want to pipe a string (which can contain multiple lines) through an external program and read the output from this program. This external program is used to modify the string. Let's simply use cat as a filter program. I tried it like this, but it doesn't work. (The output of cat goes to STDOUT instead of being read by Perl.)
#!/usr/bin/perl
open(MESSAGE, "| cat |") or die("cat failed\n");
print MESSAGE "Line 1\nLine 2\n";
my $message = "";
while (<MESSAGE>)
{
    $message .= $_;
}
close(MESSAGE);
print "This is the message: $message\n";
I've read that this isn't supported by Perl because it may end up in a deadlock, and I can understand it. But how do I do it then?
You can use IPC::Open3 to achieve bi-directional communication with the child.
use strict;
use IPC::Open3;
my $pid = open3(\*CHLD_IN, \*CHLD_OUT, \*CHLD_ERR, 'cat')
    or die "open3() failed $!";
my $r;
for (my $i = 1; $i < 10; $i++) {
    print CHLD_IN "$i\n";
    $r = <CHLD_OUT>;
    print "Got $r from child\n";
}
This involves system programming, so it’s more than a basic question. As written, your main program doesn’t require full-duplex interaction with the external program. Dataflow travels in one direction, namely
string → external program → main program
Creating this pipeline is straightforward. Perl’s open has a useful mode explained in the “Safe pipe opens” section of the perlipc documentation.
Another interesting approach to interprocess communication is making your single program go multiprocess and communicate between—or even amongst—yourselves. The open function will accept a file argument of either "-|" or "|-" to do a very interesting thing: it forks a child connected to the filehandle you’ve opened. The child is running the same program as the parent. This is useful for safely opening a file when running under an assumed UID or GID, for example. If you open a pipe to minus, you can write to the filehandle you opened and your kid will find it in his STDIN. If you open a pipe from minus, you can read from the filehandle you opened whatever your kid writes to his STDOUT.
This is an open that involves a pipe, which gives nuance to the return value. The perlfunc documentation on open explains.
If you open a pipe on the command - (that is, specify either |- or -| with the one- or two-argument forms of open), an implicit fork is done, so open returns twice: in the parent process it returns the pid of the child process, and in the child process it returns (a defined) 0. Use defined($pid) or // to determine whether the open was successful.
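For example, a bare-bones use of the "-|" form, just to show the two returns; the child's payload here is purely illustrative:
use strict;
use warnings;
# Fork a child whose STDOUT is connected to $fh in the parent.
my $pid = open(my $fh, "-|") // die "fork via open failed: $!";
if ($pid == 0) {                      # child: whatever it prints appears on $fh
    print "hello from the child ($$)\n";
    exit 0;
}
print "parent read: $_" while <$fh>;  # parent: read the child's output
close $fh or warn "close: $!";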
To create the scaffolding, we work in right-to-left order using open to fork a new process at each step.
1. Your main program is already running.
2. Next, fork a process that will eventually become the external program.
3. Inside the process from step 2:
   - First fork the string-printing process so as to make its output arrive on our STDIN.
   - Then exec the external program to perform its transformation.
4. Have the string-printer do its work and then exit, which kicks up to the next level.
5. Back in the main program, read the transformed result.
With all of that set up, all you have to do is implant your suggestion at the bottom, Mr. Cobb.
#! /usr/bin/env perl
use 5.10.0; # for defined-or and given/when
use strict;
use warnings;
my @transform = qw( tr [A-Za-z] [N-ZA-Mn-za-m] ); # rot13
my @inception = (
    "V xabj, Qnq. Lbh jrer qvfnccbvagrq gung V pbhyqa'g or lbh.",
    "V jnf qvfnccbvagrq gung lbh gevrq.",
);
sub snow_fortress { print map "$_\n", @inception }
sub hotel {
    given (open(STDIN, "-|") // die "$0: fork: $!") {
        snow_fortress when 0;
        exec @transform or die "$0: exec: $!";
    }
}
given (open(my $fh, "-|") // die "$0: fork: $!") {
    hotel when 0;
    print while <$fh>;
    close $fh or warn "$0: close: $!";
}
Thanks for the opportunity to write such a fun program!
You can use the -n command-line switch to effectively wrap your existing program code in a while loop... look at the perlrun man page for -n:
LINE:
while (<>) {
... # your program goes here
}
Then you can use the operating system's pipe mechanism directly
cat file | your_perl_prog.pl
(Edit)
I'll try to explain this more carefully...
The question is not clear about what part the perl program plays: filter or final stage. This works in either case, so I will assume it is the latter.
'your_perl_prog.pl' is your existing code. I'll call your filter program 'filter'.
Modify your_perl_prog.pl so that the shebang line has an added '-n' switch: #!/usr/bin/perl -n or #!/bin/env "perl -n"
This effectively puts a while(<>){} loop around the code in your_perl_prog.pl
Add a BEGIN block to print the header:
BEGIN { print "HEADER LINE\n"; }
You can read each line with '$line = <>;' and process/print it.
Then invoke the lot with
cat sourcefile | filter | your_perl_prog.pl
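To tie these steps together, here is a minimal sketch of what your_perl_prog.pl might look like under this scheme; the header text and the per-line processing are placeholders:
#!/usr/bin/perl -n
# -n wraps everything below in:  while (<>) { ... }
use strict;
use warnings;
BEGIN { print "HEADER LINE\n"; }   # runs once, before the implicit loop
chomp(my $line = $_);              # current line coming in from the pipe
print "processed: $line\n";        # placeholder for the real processing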
I want to expand on @Greg Bacon's answer without changing it.
I had to execute something similar, but wanted to code it without the given/when constructs, and also found that explicit exit() calls were missing, because in the sample code the child simply fell through and exited.
I also had to make it work on a version running ActiveState Perl, but that version of Perl does not support this approach.
See this question: How to read to and write from a pipe in perl with ActiveState Perl?
#! /usr/bin/env perl
use strict;
use warnings;
my $isActiveStatePerl = defined(&Win32::BuildNumber);
sub pipeFromFork
{
    return open($_[0], "-|") if (!$isActiveStatePerl);
    die "active state perl cannot cope with dup file handles after fork";
    pipe $_[0], my $child or die "cannot create pipe";
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid) { # parent
        close $child;
    } else { # child
        open(STDOUT, ">&=", $child) or die "cannot clone child to STDOUT";
        close $_[0];
    }
    return $pid;
}
my @transform = qw( tr [A-Za-z] [N-ZA-Mn-za-m] ); # rot13
my @inception = (
    "V xabj, Qnq. Lbh jrer qvfnccbvagrq gung V pbhyqa'g or lbh.",
    "V jnf qvfnccbvagrq gung lbh gevrq.",
);
sub snow_fortress { print map "$_\n", @inception }
sub hotel
{
    my $fh;
    my $pid = pipeFromFork($fh); # my $pid = open STDIN, "-|";
    defined($pid) or die "$0: fork: $!";
    if (0 == $pid) {
        snow_fortress;
        exit(0);
    }
    open(STDIN, "<&", $fh) or die "cannot clone to STDIN";
    exec @transform or die "$0: exec: $!";
}
my $fh;
my $pid = pipeFromFork($fh); # my $pid = open my $fh, "-|";
defined($pid) or die "$0: fork: $!";
if (0 == $pid) {
    hotel;
    exit(0);
}
print while <$fh>;
close $fh or warn "$0: close: $!";
The simplest way to do what the OP needs, without involving all these cool internals, is to use a temporary file to hold the output until the external processor is done, like so:
open ToTemp, "|/usr/bin/tac>/tmp/MyTmp$$.whee" or die "open the tool: $!";
print ToTemp $TheMessageWhateverItIs;
close ToTemp;
my $Result = `cat /tmp/MyTmp$$.whee`; # or open and read it, or use File::Slurp, etc
unlink "/tmp/MyTmp$$.whee";
Of course, this isn't going to work for something interactive, but co-routines appear to be out of the scope of the original question.
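If you would rather not build the temporary filename by hand, the same idea can be sketched with File::Temp, still assuming /usr/bin/tac as the external filter; names such as $to_filter are purely illustrative:
use strict;
use warnings;
use File::Temp qw(tempfile);
my $message = "Line 1\nLine 2\n";
# Let File::Temp pick the filename; UNLINK => 1 removes it at program exit.
my ($tmp_fh, $tmp_name) = tempfile(UNLINK => 1);
# Pipe the string through the external filter, redirecting into the temp file.
open my $to_filter, '|-', "/usr/bin/tac > $tmp_name"
    or die "cannot start filter: $!";
print {$to_filter} $message;
close $to_filter or die "filter failed: $!";
# Read the transformed text back in.
open my $from_tmp, '<', $tmp_name or die "cannot read $tmp_name: $!";
my $result = do { local $/; <$from_tmp> };
close $from_tmp;
print "Result:\n$result";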

Perl: Why does this create thousands of child processes?

So when I run this code it seems to fork bomb the system; can you guys help me out? All I want to do is start a process for each one of the appWatch domains and environments.
#!/usr/bin/perl
#
#
# Starts the mass processes to watch each directory & enviroment.
#
#
#
###################################################################################
use strict;
use warnings;
use POSIX 'setsid';
setsid();
my @domains = (qw(austin batman luke heman drevil joker skeltor drevil goodguy badguy));
my @envs = (qw(qa dev));
foreach my $env (@envs){
    foreach my $guy (@domains){
        unless(my $pid = fork()){
            system("echo $env.$guy");
            system("sleep 10 ");
            #system("./appWatch -d $guy -e $env");
            open PID, ">>pid.lock";
            print PID $$ . "\n";
            print "$$ is Parent, $pid is child";
        }
    }
}
wait();
Your code as shown should only create one child for each domain/environment pair. If you are seeing a flood of children being created, then you are running different code (or the culprit is appWatch, not your code). On a slightly unrelated note, there are a couple of things you should probably be doing differently:
fork has three possible return values (undef on failure, 0 in the child, and the child's pid in the parent), not two;
you must reap your children, or set the system up to reap them for you;
you should use exec instead of system if you don't want to return to the calling code;
you should use the multiple-argument forms of system and exec instead of the single-argument forms if you don't want the shell to do things with the arguments.
Here is my version of your code:
$SIG{CHLD} = "IGNORE"; #auto-reap the children
my #domains = qw(domains);
my #envs = qw(enviromentA enviromentB);
for my $env (#envs){
for my $guy (#domains){
die "could not fork: $!" unless defined(my $pid = fork);
next if $pid;
exec "./appWatch", "-d", $guy, "-e", $env;
die "exec must have failed";
}
}
Your updated version of the code shows what happened: your child does not exit. Here is how I would write your code:
#!/usr/bin/perl
# Starts the mass processes to watch each directory & enviroment.
use strict;
use warnings;
use POSIX 'setsid';
setsid();
my @domains = qw(
    austin batman luke heman
    drevil joker skeltor drevil
    goodguy badguy
);
my @envs = qw(qa dev);
my @pids;
for my $env (@envs){
    for my $guy (@domains){
        die "could not fork: $!" unless defined(my $pid = fork);
        if ($pid) {
            push @pids, $pid;
            next;
        }
        print "$env.$guy\n";
        sleep 10; # FIXME: I don't know if you really need this
        # exec will replace the child process with appWatch
        exec "./appWatch", "-d", $guy, "-e", $env;
        die "exec failed for some reason";
    }
}
for my $pid (@pids) {
    waitpid $pid, 0;
}
With
$ cat appWatch
#! /usr/bin/perl -l
print "[", join("][" => #ARGV), "]";
running on
$ uname -a
Linux mybox 2.6.32-24-generic #39-Ubuntu SMP Wed Jul 28 05:14:15 UTC 2010 x86_64 GNU/Linux
I get no fork bomb, just an unexciting Cartesian product:
$ ./prog.pl
[-d][domains][-e][enviromentA]
[-d][domains][-e][enviromentB]