Perl Linux::Inotify2 - can't respond to events anymore

I am getting some really weird behavior when using Linux::Inotify2 module for watching a directory for any newly created files.
I wrote a test script to see how it worked, and once that was done, I went on to incorporate it into my other scripts, where it didn't work. Then, when I tried my earlier test script again to gather more information, strangely it had stopped working as well. It hasn't worked since, and there were no package or distro upgrades during that time.
The problem is that it has stopped responding to events. Here's the test script:
#!/usr/bin/perl
use strict;
use warnings;
use Linux::Inotify2;
my $inotify = Linux::Inotify2->new
    or die "unable to create new inotify object: $!";
my $dir = "/my/dir";
$inotify->watch($dir, IN_CREATE, sub {
    my $e = shift;
    print $e->fullname;
}) or die "Can't watch: $!";
1 while $inotify->poll;
Attaching strace to the running script kills it. When the script is started under strace instead, it does seem to read the new events, but there is no response to them. Any suggestions for debugging this further?

I had forgotten to set $| (output autoflush).
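For anyone hitting the same symptom: the callback was firing, but the print output was sitting in the stdio buffer. A minimal sketch of the fix is to enable autoflush before the poll loop:

$| = 1;                   # autoflush STDOUT so prints appear immediately
1 while $inotify->poll;   # events now show up as they happen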

Related

How can my Perl script detect when a child process (run with `system`) is killed?

My Perl script
while (blah)
{
    system("wget $blah");
}
does not die when I press Ctrl-C. Instead, the child wget process dies and the while loop continues.
How can the parent Perl process detect this and terminate?
Check the return value of system:
0 == system "wget $blah" or die "Can't get $blah.";
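If you specifically want to detect that the child was killed by a signal (for example SIGINT from Ctrl-C), the status can be decoded further. A minimal sketch, reusing $blah from the question:

my $status = system "wget", $blah;   # list form avoids the shell
if ($status == -1) {
    die "failed to start wget: $!";
}
elsif (my $sig = $? & 127) {
    die "wget was killed by signal $sig\n";   # e.g. 2 for SIGINT
}
elsif ($? >> 8) {
    warn "wget exited with status ", $? >> 8, "\n";
}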
I prefer qx (backticks) to system. Here is an example using backticks; however, you can do the same with system.
use English; ## so I can use $CHILD_ERROR rather than $?; see perlvar for more info
qx(exit 1); ## anything other than 0 is an error in Linux
print "failed\n" if $CHILD_ERROR; ## you can die here
However, wget may not always fail in a detectable way: you may get served a 404 page, which will not be picked up as a failure.
use Mojo::UserAgent; ## use Mojo
my $url = "http://madeup.com/hdhd";
my $res = Mojo::UserAgent->new->get($url)->result;
unless ($res->code == 200) {
    print "ERROR\n" . $res->message . "\n"; ## you can use die here
}
The same as above, using HTTP::Tiny:
use HTTP::Tiny; ## use HTTP::Tiny
my $url = "https://madeup.com/hdhd";
my $res = HTTP::Tiny->new->get($url);
unless ($res->{success}) {
    print "ERROR\n" . $res->{status} . "\n"; ## you can die here
}
I'm inclined to do my own fork and manage the process more closely if I want to actually think about the child. Set up various signal handlers, exit with appropriate values, and so on. That gives you fine-grained control over what's happening. That forked process might even exec to become wget. And, since it's a different process, you can send signals to it directly.
But if wget is the command you want to use, I'd wonder why you aren't using a module to do that work instead. You'd get a lot more control when you have access to the request and response info.
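A minimal sketch of that fork/exec approach, assuming $blah holds the URL as in the question; the signal handling here is illustrative rather than a complete process manager:

my $pid = fork() // die "fork failed: $!";
if ($pid == 0) {
    exec "wget", $blah or die "exec wget failed: $!";   # child becomes wget
}
local $SIG{INT} = sub {
    kill INT => $pid;     # forward Ctrl-C to the child,
    waitpid $pid, 0;      # reap it,
    die "interrupted\n";  # and stop the parent too
};
waitpid $pid, 0;          # otherwise just wait for wget to finish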

How do I run shell commands in a CGI program as the nobody user?

I want to run shell commands in a CGI program (written in Perl). My program doesn’t have root permission. It runs as nobody. I want to use this code:
use strict;
system <<'EEE';
awk '{a[$1]+=$2;b[$1]+=$3}END{for(i in a)print i, a[i], b[i]|"sort -nk 3"}' s.txt
EEE
I can run my code successfully with perl from the command line but not as a CGI program.
Based on the code in your question, there are at least four possibilities for failure.
The nobody user does not have permission to execute your program.
The Perl code in your question has no shebang (#!) line. You are trying to run awk, so I assume you are running on some form of Unix. If your code is missing this line, then your operating system does not know how to run your program.
The file s.txt is either not in the executing program’s working directory, or it is not readable by the nobody user.
For whatever reason, awk is not reachable via the PATH of your executing program’s environment.
To quickly diagnose such low-level problems, try to have all error output show up in the browser. One way to do this is to add the following just after the shebang line in your code.
BEGIN {
    print "Content-type: text/plain\n\n";
    open STDERR, ">&", \*STDOUT or print "$0: dup: $!";
}
The output will render as plain text rather than HTML, but this is a temporary measure to see your program's output. By wrapping it in a BEGIN block, the code executes as soon as it has been parsed. Redirecting STDERR means your browser also gets anything written to the standard error.
Another way to do this is with the CGI::Carp module.
use CGI::Carp 'fatalsToBrowser';
This way, errors go to the browser and also to the web server’s error log.
If you still see 500-series errors from your server, the problem is happening at a lower level: probably some failure to start perl. Go examine your server’s error log. Once your program is executing, you can remove this temporary redirection of error output.
Finally, I recommend changing your program to
#! /usr/bin/perl -T
BEGIN { print "Content-type: text/plain\n\n"; }
use strict;
use warnings;
$ENV{PATH} = "/bin:/usr/bin";
my $input = "/path/to/your/s.txt";
my $buckets = <<'EOProgram';
{ a[$1] += $2; b[$1] += $3 }
END { for (i in a) print i, a[i], b[i] }
EOProgram
open STDIN, "-|", "awk", $buckets, $input or die "$0: open: $!";
exec "sort", "-nk", 3 or die "$0: exec: $!";
The -T switch enables taint mode, a security dataflow analysis that prevents you from using unsanitized input in system operations such as open, exec, and so on, which an attacker (or a benign user supplying unexpected input) could otherwise use to harm your system. You should always add -T to CGI programs and to any other code that runs on behalf of another user.
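As a hedged illustration of what taint mode catches: under -T, outside data cannot reach open or exec until it has been laundered through a regex capture. The pattern below is only an example; use one that matches what your input should look like.

# dies under -T unless the value is untainted first
my ($clean) = ($ENV{QUERY_STRING} // '') =~ /\A(\w+)\z/
    or die "unexpected input";
# $clean is now untainted and safe to pass to open, exec, etc.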
Given the nature of your awk program, a content type of text/plain seems reasonable. Output it as soon as possible.
With taint mode enabled, be explicit about the value of your PATH environment variable. If instead you stick with whatever untrusted PATH your program inherits, attempting to run external programs will fail.
Nail down the full path of your input. This will eliminate surprises.
Using the multi-argument forms of open and exec eliminates the shell and its argument parsing. (For completeness, system also has a similar multi-argument form.) Yes, writing it this way can mean being a little more deliberate (such as breaking out the arguments and setting up the pipeline yourself), but it also avoids nasty surprises.
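For comparison, a sketch of the multi-argument form of system, reusing $buckets and $input from the program above; no shell is involved, so there are no quoting surprises:

system("awk", $buckets, $input) == 0
    or die "$0: awk failed: $?";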
I'm sure the nobody user is allowed to run shell commands. The problem is that nobody doesn't have permission to open the file s.txt. Add read permission for everyone on s.txt, and add execute permission for everyone on every directory leading to s.txt.
I would suggest finding out the fully qualified path for awk and specifying it directly. The nobody user that launched httpd likely has a very minimal path in its $ENV{PATH}; displaying $ENV{PATH} will, I am guessing, show this.
This is a good thing. I wouldn't modify the path; just specify the full path, /usr/bin/awk or whatever it is.
If you have shell access and the command works there, type 'which awk' to find this out.
I can run my code successfully as a Perl file but not as a CGI file.
What web server are you running under? For instance, Apache requires printing a CGI header first, i.e. print "Content-type: text/plain; charset=utf-8\n\n", or:
use CGI;
my $q = CGI->new();
print $q->header('text/html');
(See CGI)
Apache will complain in the log (error.log) about "premature end of script headers" if what I said is the case.
You could just do it inline without having to fork out to another process...
if ( open my $fh, '<', 's.txt' ) {
    my %data;
    while (<$fh>) {
        my ($c1, $c2, $c3) = split;
        $data{a}{$c1} += $c2;
        $data{b}{$c1} += $c3;
    }
    foreach ( sort { $data{b}{$a} <=> $data{b}{$b} } keys %{ $data{b} } ) {
        print "$_ $data{a}{$_} $data{b}{$_}\n";
    }
}
else {
    warn "Unable to open s.txt: $!\n";
}

How do I avoid the need to wait on and close some software driven from Perl?

I have a folder full of script files. When I run them, a program to which they are native is opened, it does some stuff, and a CSV file is generated. I wrote some code to run each script file and produce a bunch of CSV files, one for each script.
What happens is the following: when my Perl application is executed, the software is launched and the first of the scripts runs successfully (a CSV file is created). However, at this point the Perl application waits for me to close the software before it continues, and it does this for every script. What can I do to prevent this?
use strict;
use warnings;
use Cwd;

my $dir = cwd();
opendir(DIR, $dir) or die "Can't open $dir: $!";
my @files = grep(/\.acs$/, readdir(DIR));
closedir(DIR);
$dir =~ s/\//\\/g;   # convert to backslashes for Windows
chdir $dir;

foreach (@files)
{
    print "$_\n";
    system("$_");
}
I think you'll want to fork and exec and waitpid - i.e. set up and run your own process and wait for it to finish on your own.
http://larc.ee.nthu.edu.tw/~cthuang/courses/ee2320/12_process.html
This isn't easy, but unfortunately, you're doing this on an OS that isn't script-friendly. Under Linux or OS X you wouldn't have troubles like this.
You'll need to figure out whether these calls are even available under Windows. You may have to find similar facilities that are available under Windows if there is no POSIX compatibility layer.
Your best choice is to ask the application nicely to close itself after it is done.
For example, cmd.exe has a /C parameter that does exactly that.
Try running the application with /? and see if anything useful comes up.
Failing that, you can use Win32::Process to create the process and then kill it after you are sure it is done; see that module's documentation.
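A hedged sketch of the Win32::Process route; the program path and command line below are hypothetical stand-ins for the native application:

use Win32::Process;
use Win32;

Win32::Process::Create(
    my $proc,
    'C:\\path\\to\\app.exe',   # hypothetical path to the native program
    'app.exe script.acs',      # hypothetical command line
    0,                         # don't inherit handles
    NORMAL_PRIORITY_CLASS,
    '.',                       # working directory
) or die Win32::FormatMessage(Win32::GetLastError());

$proc->Wait(INFINITE);   # block until the program exits on its own
# or poll with $proc->Wait(0) and call $proc->Kill(0) once the CSV exists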

How does Perl interact with the scripts it is running?

I have a Perl script that runs a different utility (called Radmind, for those interested) that has the capability to edit the filesystem. The Perl script monitors output from this process, so it would be running throughout this whole situation.
What would happen if the utility being run by the script tried to edit the script file itself, that is, replace it with a newer version? Does Perl load the script and any linked libraries at the start of its execution and then ignore the script file itself unless told specifically to mess with it? Or perhaps, would all hell break loose, and executions might or might not fail depending on how the new file differed from the one being run?
Or maybe something else entirely? Apologies if this belongs on SuperUser—seems like a gray area to me.
It's not quite as simple as pavel's answer states, because Perl doesn't actually have a clean division of "first you compile the source, then you run the compiled code"[1], but the basic point stands: each source file is read from disk in its entirety before any code in that file is compiled or executed, and any subsequent changes to the source file will have no effect on the running program unless you specifically instruct perl to re-load the file and execute the new version's code[2].
[1] BEGIN blocks will run code during compilation, while commands such as eval and require will compile additional code at run-time
[2] Most likely by using eval or do, since require and use check whether the file has been loaded already and ignore it if it has.
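As a quick sketch of [2]: do re-reads the file from disk on every call, unlike require, so it can pick up an edited version; config.pl here is a hypothetical file name.

my $config = do "./config.pl";
die "couldn't parse config.pl: $@" if $@;
die "couldn't read config.pl: $!" unless defined $config;
# calling do "./config.pl" again later re-reads the possibly changed file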
For a fun demonstration, consider
#! /usr/bin/perl
die "$0: where am I?\n" unless -e $0;
unlink $0 or die "$0: unlink $0: $!\n";
print "$0: deleted!\n";
for (1 .. 5) {
    sleep 1;
    print "$0: still running!\n";
}
Sample run:
$ ./prog.pl
./prog.pl: deleted!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
./prog.pl: still running!
Your Perl script will be compiled first, then run; so changing your script while it runs won't change the running compiled code.
Consider this example:
#!/usr/bin/perl
use strict;
use warnings;

push @ARGV, $0;   # put this script itself on the list of files to edit
$^I = '';         # enable in-place editing of the files in @ARGV

my $foo = 42;
my $bar = 56;
my %switch = (
    foo => 'bar',
    bar => 'foo',
);

while (<ARGV>) {
    s/my \$(foo|bar)/my \$$switch{$1}/;   # swap the variable names in the source
    print;
}

print "\$foo: $foo, \$bar: $bar\n";
print "\$foo: $foo, \$bar: $bar\n";
and watch the result when it is run multiple times: setting $^I to the empty string turns on in-place editing of the files named in @ARGV, so each run rewrites the script itself, swapping the $foo and $bar declarations.
The script file is read once into memory. You can edit the file from another utility after that -- or from the Perl script itself -- if you wish.
As the others said, the script is read into memory, compiled, and run. GBacon's answer shows that you can delete the file and it will keep running. The code below shows that you can change the file, do it, and get the new behavior.
use strict;
use warnings;
use English qw<$PROGRAM_NAME>;

open my $ph, '>', $PROGRAM_NAME or die "Can't overwrite $PROGRAM_NAME: $!";
print $ph q[print "!!!!!!\n";];   # replace this script's source entirely
close $ph;
do $PROGRAM_NAME;                 # re-read and run the new version
... DON'T DO THIS!!!
Perl scripts are simple text files that are read into memory and compiled in memory; the text file of the script is not read again. (Exceptions are modules that come into lexical scope after compilation, and do and eval statements in some cases...)
There is a well-known utility that exploits this behavior. Look at CPAN and its many versions, which are probably in your /usr/bin directory. There is a CPAN version for each version of Perl on your system. CPAN will sense when a new version of CPAN itself is available, ask if you want to install it, and if you say "y", it will download the newer version and respawn itself right where you left off without losing any data.
The logic of this is not hard to follow. Read /usr/bin/CPAN and then follow the individualized versions related to what $Config::Config{version} would generate on your system.
Cheers.

A 'do' statement at the end of my perl script never runs

In my main script, I am doing some archive manipulation. Once that is complete, I want to run a separate script to upload my archives to an FTP server.
Separately, these scripts work well. I want to add the FTP script to the end of my archive script so I only need to worry about scheduling one script, and I want to guarantee that the first script completes its work before the FTP script is called.
After looking at the different methods of calling my FTP script, I settled on do. However, when my do statement is at the end of the script, it never runs. When I place it in my main foreach loop, it runs fine, but it runs multiple times, which I want to avoid since the FTP script can handle having multiple archives to upload.
Is there something I am missing? Why does it not run?
Here is the relevant code:
chdir $input_dir;
@folder_list = <*>;

foreach $file (@folder_list)
{
    if ($file =~ m/.*zip/)
    {
        print "found $file\n";
        print "Processing Files...\n";
        mkdir 'BuildDir';
        $new_archive = Archive::Zip->new();
        $archive_name = $file;
        $zip = Archive::Zip->new($file);
        $zip->extractTree('', $build_dir);
        &Process_Files;
    }
}

do 'ArchiveToFTPServer.pl';
print "sending files to FTP server";
Thanks
I ended up copying and pasting the FTP code into the main file as a sub. It works fine when I call it at the end of the foreach loop.
Check out the docs for the do function.
In there, you'll find a code sample:
unless ($return = do $file) {
    warn "couldn't parse $file: $@" if $@;
    warn "couldn't do $file: $!" unless defined $return;
    warn "couldn't run $file" unless $return;
}
I suggest putting this code in to find out what's happening with your do call. In addition, try adding warnings and strict to your code to weed out any subtle bugs.
Add these lines to your scripts:
use strict;
use warnings;
You will now get more diagnostic information, which should lead you to the solution. My current bet is that you are not specifying the correct path to the other script, or that it is missing a shebang line.
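If the path turns out to be the problem, one hedged sketch is to resolve the helper script relative to the calling script's own directory with FindBin, so the current working directory no longer matters:

use FindBin qw($Bin);

my $ftp_script = "$Bin/ArchiveToFTPServer.pl";
my $return = do $ftp_script;
die "couldn't parse $ftp_script: $@" if $@;
die "couldn't run $ftp_script: $!" unless defined $return;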
What's the call to the new script? If using a shell, did you check your environment variables?