Perl Expect with FreezeThaw - perl

I am converting an interactive command line tool to a web application, with the tool as the backend.
I take in user commands (using AJAX) and call a Perl CGI script that extracts the command.
I then use Expect to send the command to the process, collect the output, and pass it to the resulting HTML page.
The first command that the user inputs executes fine.
The next commands aren't executed.
I am using FreezeThaw to freeze the expect object after the first request and then to thaw it for the following requests. It freezes fine, but doesn't thaw.
Here's my code:
use strict;
use warnings;
use CGI;
use Expect;
use FreezeThaw qw(freeze thaw);

if ( -e "logFile" ) {
    ## Log file exists, just run the command after retrieving the object
    ## Retrieve object here
    my ($expectObject) = thaw( $params{'object'} );
    if ( $command eq 'exit' ) {
        # exit
    }
}
else {
    print "log NOT exists!!";
    ## Log file doesn't exist, spawn a new process and loop
    my $expectObject = Expect->spawn("command") || die "\nCannot spawn: $!\n";
    $expectObject->expect( 15, "prompt>" );
    $expectObject->send("$command\r");
    $expectObject->expect( 15, "stile>" );
    $output = $expectObject->before();
    print "<br>$output<br>";
    ## Persist object here in file
    my $serialized = freeze($expectObject);
    ## Write serialized object to file
    die "Serialization Error (write):\n$!"
        if ( !addParameter( "$workingDir/$folderName", "object", $serialized ) );
}
Any ideas why it is failing?

When a Perl CGI program ends, it destroys every process it spawned unless that process daemonizes itself. A frozen Expect object cannot be restored to a working session either: it wraps a pty filehandle and a live child process, and neither survives serialization.
Use mod_perl or some other persistent mechanism to keep the shell/command open, or execute all the commands one by one.
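As a rough sketch of such a persistent mechanism (the spawned command, the prompt string, and the socket path below are all placeholders): a small long-running daemon owns the Expect session and listens on a UNIX socket; each CGI request just connects, sends one command, and reads the reply, so nothing ever has to be frozen:
#!/usr/bin/perl
# Rough sketch of a persistent backend. "command", "prompt>" and the socket
# path are placeholders; the real values come from the tool being wrapped.
use strict;
use warnings;
use Socket;
use IO::Socket::UNIX;
use Expect;

my $sock_path = '/tmp/tool-backend.sock';
unlink $sock_path;
my $listener = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => $sock_path,
    Listen => 5,
) or die "Cannot listen on $sock_path: $!";

# One Expect session lives for as long as the daemon does.
my $exp = Expect->spawn("command") or die "Cannot spawn: $!";
$exp->expect( 15, "prompt>" );

while ( my $client = $listener->accept ) {
    my $cmd = <$client>;
    next unless defined $cmd;
    chomp $cmd;
    last if $cmd eq 'exit';
    $exp->send("$cmd\r");
    $exp->expect( 15, "prompt>" );
    print {$client} $exp->before();
    close $client;
}
The CGI script then only has to open the same socket, print the user's command, and relay whatever it reads back into the HTML response.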

Related

Run multiple SHELL commands within same SHELL

I am working on a legacy and confined system where I have no known way to get anything installed. I need to run multiple shell commands in the same shell; for example, the second command should be aware of any changes the previous command made to the shell environment (such as environment variable manipulation). The system does not have the Perl Expect module available, so I am trying to use IPC::Open3 to open a shell, run commands, and check stdout and stderr for expected strings. This works fine only for the first call of the runner function, not for the second call.
Here is my code, a dumbed-down version:
#!/usr/bin/perl
use IPC::Open3;
use utf8;

# First expect script
my $pid = open3(*IN, *OUT, *ERR, "/bin/bash");
my @output_arr;
my @error_arr;

sub runner {
    my ($PID, $CMD, $REGEX) = @_;
    print IN "$CMD\n";
    close IN;
    waitpid $PID, 0;
    my $cmd_output = do { local $/; <OUT> };
    my $cmd_error  = do { local $/; <ERR> };
    if ($cmd_output =~ m/$REGEX/ or $cmd_error =~ m/$REGEX/) {
        print "$CMD output contains $REGEX\n";
    }
    else {
        print "$CMD output does not contain $REGEX, command output is $cmd_output ,error is $cmd_error\n";
    }
    push @output_arr, $cmd_output;
    push @error_arr, $cmd_error;
    return ($cmd_output, $cmd_error);
}
# This data will come from other programs.
#this works
my ($date_output, $date_error) = runner($pid, "LC_ALL=C date", qr/2023/s);
#this never works
my ($date_output, $date_error) = runner($pid, "LC_ALL=C date", qr/Jan/s);
The output:
LC_ALL=C date output contains (?^s:2023)
LC_ALL=C date output does not contain (?^s:Jan), command output is ,error is
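One way to keep reusing the same shell, sketched below as an untested outline, is to leave IN open and print a unique marker after each command, then read up to that marker instead of closing the handle and waiting for bash to exit:
#!/usr/bin/perl
# Sketch only: keep the input handle open and mark the end of each command's
# output with a unique string. stderr handling is left out here; a real
# version would read the error handle too (or add 2>&1 to the commands).
use strict;
use warnings;
use IPC::Open3;
use Symbol qw(gensym);

my $err = gensym();
my $pid = open3(my $in, my $out, $err, "/bin/bash");

sub run_in_shell {
    my ($cmd) = @_;
    my $marker = '__DONE_' . $$ . '_' . time() . '__';
    print {$in} "$cmd\necho $marker\n";
    my $output = '';
    while (my $line = <$out>) {
        last if $line =~ /^\Q$marker\E/;
        $output .= $line;
    }
    return $output;
}

print run_in_shell("LC_ALL=C date");
print run_in_shell("export FOO=bar; echo FOO=\$FOO");   # same shell, env persists

print {$in} "exit\n";
close $in;
waitpid $pid, 0;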

Daemonize a perl script

Currently I am looking to daemonize a Perl script. Sadly, most answers are out of date, and I do not quite understand how to begin the daemon process (especially for daemonized Perl scripts).
Right now I am looking at Proc::Daemon, but again I do not know where to begin or whether it should be done with or without the use of a module.
I believe giving an example of what I am looking for will give this question a little more direction.
Example
Say I am on OS X and I want to write a Perl script that can run as a daemon. It responds to the signal HUP, upon which it prints the contents of a file from a certain directory. If it receives signal USR1, it prints out the contents differently. What is the most appropriate way to do this as a daemon?
This is all you need:
#!/usr/bin/perl
use strict;
use warnings;

use Daemon::Daemonize qw( daemonize write_pidfile );

sub sighup_handler {
    ...
}

sub sigusr1_handler {
    ...
}

{
    my $name          = "...";
    my $error_log_qfn = "/var/log/$name.log";
    my $pid_file_qfn  = "/var/run/$name.pid";

    daemonize(
        close  => 'std',
        stderr => $error_log_qfn,
    );

    $SIG{HUP}  = \&sighup_handler;
    $SIG{USR1} = \&sigusr1_handler;

    write_pidfile($pid_file_qfn);

    sleep while 1;
}
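To exercise the handlers once the daemon is running, another process can read the pid file and send the signals; for example (assuming a concrete $name has been filled in, matching the daemon above):
use strict;
use warnings;

my $name = "...";   # same name as in the daemon above
my $pid  = do {
    open my $fh, '<', "/var/run/$name.pid" or die $!;
    local $/;
    <$fh>;
};
chomp $pid;
kill 'HUP',  $pid;   # runs sighup_handler in the daemon
kill 'USR1', $pid;   # runs sigusr1_handler in the daemon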

Check for file and process it, existence of file launch script - perl

How can I write a Perl program that will wait for an input file to be created and then process the contents of that file? Ideally this should be a cross-platform solution rather than relying on features of a given operating system.
Note that the file might exist as soon as the script is started, or it might only be created at a later date. Once the input file has been processed the Perl program can exit.
[Note that this is a substantially edited version of the original question. The original poster should feel free to make additional edits if I didn't properly understand the question.]
Take a look at File::Monitor. It will inform you of all sorts of file system events. This program shows how to monitor file creation in a specific directory and process all new XML files.
use strict;
use warnings;
use 5.010;

use File::Monitor;

my $monitor = File::Monitor->new;

$monitor->watch( {
    name     => 'E:\Perl\source',
    files    => 1,
    callback => { files_created => \&files_created },
} );

while () {
    my @new = $monitor->scan;
    sleep 1;
}

sub files_created {
    my ($name, $event, $change) = @_;
    for my $file ( $change->files_created ) {
        process_xml($file) if $file =~ /\.xml$/i;
    }
}

sub process_xml {
    my ($file) = @_;
    say $file;
}
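If installing File::Monitor is not possible, a plain polling loop is a portable fallback; this sketch (the file name is only an example) waits for a single input file, reads it, and then lets the program exit, as the question describes:
use strict;
use warnings;

my $file = 'input.txt';            # example path
sleep 1 until -e $file;            # poll until the file shows up
open my $fh, '<', $file or die "Cannot open $file: $!";
my $contents = do { local $/; <$fh> };
close $fh;
# ... process $contents, then let the program exit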

Open3 outputting to std error only and not asynchronously

I am relatively new to Perl programming and I am trying to figure out how open3 works. Here is the code.
#!/usr/bin/perl
use strict;
use warnings;
use IPC::Open3;

my $dir = "/home/vman/Documents/Dev/perl_scripts/Provenance/temp";
my $fileHandle;
my $bashPid;

print "Starting main program\n";

my $pid = fork();
if ($pid) {    # Parent process
    print("Start transfer.\n");
    $bashPid = $pid;
    # Attaching an strace to the executed command which happens in the child process
    open3(\*WRITE, \*READ, \*ERROR,
          "strace", "-f", "-F", "-e", "trace=open,execve", "-p", $bashPid, "-s", "2097152", "-q");
    while (<READ>) {
        print("Here1\n");
        print("$_");
    }
    while (<ERROR>) {
        print("$_");
    }
    print("Finish transfer.\n");
}
elsif ($pid == 0) {    # Child process
    if (scalar(@ARGV) == 0) {
        exit;
    }
    my $args = join(' ', @ARGV);
    exec($args);
}
else {
    die("Could not fork.");
}

close(READ);
close(WRITE);
close(ERROR);
waitpid($bashPid, 0);
print "End of main program\n";
I want to run an strace on a bash process and capture all of its output as it is being produced. I will then parse that output to see which files are being changed by which process and save those changes in a MySQL database. For now, all I am trying to do is attach an strace to an existing bash process and print the strace output in the running bash terminal, just to make sure the output is being read asynchronously.
One of the problems is that I am getting the output through the ERROR filehandle. I am a little confused about why this is happening. Am I using the correct order of handles for open3, and if there is an error, why is the correct output even making it to stderr?
The second problem is that I only get the output when exec ends, which is no good, since it needs to be read while exec is running. I thought open3 ran asynchronously.
As suggested, this is what I did and it works perfectly.
#!/usr/bin/perl
use strict;
use warnings;
use IPC::Run3;

my $bashPid;

print "Starting main program\n";

my $pid = fork();
if ($pid) {    # Parent process
    print("Start transfer.\n");
    $bashPid = $pid;
    # Attaching an strace to the executed command which happens in the child process
    my $command = "strace -fFe trace=open,execve -p $bashPid -s 2097152 -q";
    run3($command, \*STDIN, \*STDOUT, \*STDERR);
    if ($?) {
        die "something went horribly wrong";
    }
    while (<STDERR>) {
        print($_);
    }
    print("Finish transfer.\n");
}
elsif ($pid == 0) {    # Child process
    if (scalar(@ARGV) == 0) {
        exit;
    }
    my $args = join(' ', @ARGV);
    exec($args);
}
else {
    die("Could not fork.");
}

close(STDIN);
close(STDOUT);
close(STDERR);
waitpid($bashPid, 0);
print "End of main program\n";
One of the problems is that I am getting the output through the ERROR filehandle
Correct. strace writes to STDERR.
The second problem I have is that I am getting the output only when exec ends which is no good since it needs to be done while exec is running. I thought open3 runs asynchronously.
That's because you only start reading from the child's STDERR after the child closes its STDOUT when it ends.
In fact, you're lucky you haven't deadlocked yet. By reading one stream at a time, as you are currently doing, you'll deadlock as soon as strace produces enough output to fill the pipe.
You need to read from both the child's STDOUT and STDERR as the output arrives. You could do this with the help of select, polling non-blocking handles, or threads. None of those options is as simple as ditching open3 and using a higher-level module that handles this for you. The simpler IPC::Run3 and the fully featured IPC::Run are both good choices.
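As a rough, untested illustration of the IPC::Run3 route (the strace arguments mirror the question): run3 captures the child's streams into scalars, though it only returns after the command has finished, so for reading the trace incrementally IPC::Run's pump loop would be the better fit:
#!/usr/bin/perl
# Rough illustration, not a drop-in fix: capture strace's streams with run3.
use strict;
use warnings;
use IPC::Run3;

my $bashPid = shift @ARGV or die "usage: $0 pid\n";
my @cmd = ('strace', '-f', '-e', 'trace=open,execve', '-p', $bashPid, '-q');

my ($out, $err);
run3 \@cmd, \undef, \$out, \$err;   # stdin closed, stdout/stderr captured
die "strace failed" if $?;
print $err;                         # strace writes its trace to stderr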

How to launch multiple fire-and-forget PHP scripts with Perl?

I currently have a Perl script which I am trying to use to launch three (or more) PHP scripts, each with a set of arguments provided from a database:
$sql = "SELECT id,url,added,lasttotal,lastsnapshot,speed,nextsnapshot FROM urls WHERE DATE(NOW()) > DATE(nextsnapshot) LIMIT 0,3";
$sth = $dbh->prepare($sql);
$sth->execute or print "SQL Error: $DBI::errstr\n";

my ($urlID, $url, $added, $lastTotal, $lastSnapshot, $lastSpeed, $nextsnapshot);
$sth->bind_col(1, \$urlID);
$sth->bind_col(2, \$url);
$sth->bind_col(3, \$added);
$sth->bind_col(4, \$lastTotal);
$sth->bind_col(5, \$lastSnapshot);
$sth->bind_col(6, \$lastSpeed);
$sth->bind_col(7, \$nextsnapshot);

while ($sth->fetch) {
    $myexec = "php /usr/www/users/blah/blah/launch_snapshot.php '$url' $urlID '$added' $lastTotal '$lastSnapshot' $lastSpeed" . ' > /dev/null 2>&1 &';
    exec($myexec) or print "\n Couldn't exec $myexec: $!";
}
I don't care about any results from the PHP scripts, I just need to start them all at once, or with a very small delay.
The fetch works properly and returns three unique sets of values. However, it never seems to get past launching the first php script. I don't get any error messages.
Any help would be most appreciated.
You could use fork or just system for that.
Using fork:
foreach ($sth->fetch) {
    my $pid = fork();
    if ($pid) {              # Parent
        waitpid($pid, 0);
    } elsif ($pid == 0) {    # A child
        $myexec = "...";
        exec($myexec) or print "\n Couldn't exec $myexec: $!";
        exit(0);             # Important!
    } else {
        die "couldn't fork: $!\n";
    }
}
Using system:
foreach ($sth->fetch) {
    $myexec = "...";
    system($myexec);
}
From perldoc -f exec
    exec LIST
    exec PROGRAM LIST
        The "exec" function executes a system command and never
        returns -- use "system" instead of "exec" if you want it to
        return. It fails and returns false only if the command does
        not exist and it is executed directly instead of via your
        system's command shell (see below).
You want system (or fork), not exec.
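If the goal really is fire-and-forget, a minimal sketch (reusing the $myexec built in the question's loop, shown here as "..." just like above) is to fork, exec in the child, and not wait in the parent; setting $SIG{CHLD} to 'IGNORE' keeps the unreaped children from becoming zombies:
$SIG{CHLD} = 'IGNORE';            # let the kernel reap the children
while ($sth->fetch) {
    my $myexec = "...";           # build the php command as above
    my $pid = fork();
    die "couldn't fork: $!\n" unless defined $pid;
    next if $pid;                 # parent: go straight to the next row
    exec($myexec) or die "Couldn't exec $myexec: $!";
}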