I have some scripts that I have started unit-testing using the "modulino" idea. I have encountered a problem: when the script is called with "perl -d", the script does not run, because caller() returns a true value.
I have the main body of the script wrapped in a main() subroutine, and I am slowly pulling pieces of main() out into their own subroutines.
At the top of the script I have:
main(@ARGV) unless caller();
When the script is loaded by .t tests it works as I want: main() is not run, so I can test the subroutines. When I call the script from the CLI it works great, calling main().
The problem occurs when I call it from the CLI with:
perl -d myscript.pl
At this stage caller() returns a true value (rather than undef) and main() is not called.
Suggestions would be much appreciated about how to approach this one.
The situation with the -d switch is similar to the testing one: your code is executed by something else, in this case the debugger.
You can either run main() yourself by calling it manually in the debugger, or you can detect whether the caller is the debugger. Something like:
main(@ARGV) if !caller() || (caller)[0] eq 'DB';
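For context, a minimal modulino skeleton built around this guard might look like the following; the process() subroutine and its behaviour are made up purely for illustration:

#!/usr/bin/perl
use strict;
use warnings;

# Run main() when executed directly or under the debugger (package DB),
# but not when loaded via require/do from a .t file.
main(@ARGV) if !caller() || (caller)[0] eq 'DB';

sub main {
    my @args = @_;
    print process($_), "\n" for @args;
}

# A hypothetical subroutine that the .t files can test in isolation.
sub process {
    my ($arg) = @_;
    return uc $arg;
}

1; # true value so the file can be loaded with require() from tests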
How can I call a script with a preceding argument before the script path (plackup E:\Mojolicious_server.pl) using Task Scheduler or a batch file?
So I have multiple Mojolicious applications.
I have bundled them all into a psgi server using plack.
My plack script looks like this...
use Plack::Builder;
use lib 'push_the_button/lib';
use lib 'Phone_Book/lib';
use Mojo::Server::PSGI;
use Plack::Session::Store;
use Data::Dumper;
use File::Basename;

my $current_directory = dirname(__FILE__);
my ($push_the_button, $phone_book);

{
    my $server_1 = Mojo::Server::PSGI->new;
    $server_1->load_app($current_directory . '/Phone_Book/script/application');
    $phone_book = sub { $server_1->run(@_) };
}
{
    my $server_2 = Mojo::Server::PSGI->new;
    $server_2->load_app($current_directory . '/push_the_button/script/push_the_button.pl');
    $push_the_button = sub { $server_2->run(@_) };
}

builder {
    mount "/phone_book" => builder { $phone_book };
    mount "/push_the_button" => builder { $push_the_button };
};
Now I want to run this as a scheduled task at system startup.
To run this script normally, I would go to cmd:
pushd c:\strawberry\perl\bin
Then I would run this command
plackup E:/Mojolicious_Server.pl
My issue seems to be the plackup portion.
I have tried adding plackup E:/Mojolicious_Server.pl to the Arguments field. I have also tried adding plackup to the Arguments field and E:/Mojolicious_Server.pl in the "Start in" field. Of course C:\strawberry\perl\bin\perl.exe is the program to start.
Once I tried all the variations I could think of (including variations on double and single quotes), I wrote a very simple batch file to run (even just in the terminal for testing). It looks like this.
#echo off
call "C:\Strawberry\perl\bin\perl.exe" "plackup E:\Mojolicious_Server.pl"
Which says "Can't open perl script".
I went ahead and tried another route using another perl script to execute my command...
#! C:\strawberry\perl\bin\perl.exe
`plackup E:\\Mojolicious_Server.pl`;
This at least completes successfully in the Task Scheduler, but doesn't actually do anything...
Just as a last resort (obviously it wouldn't work), I added plackup E:\\Mojolicious_Server.pl; at the end of my Mojolicious_Server.pl script and ran that script. It didn't work, as I expected (calling a script that's already running).
This seems like it should be very easy; I'm sorry if I'm missing something simple. Any notion in the right direction would be appreciated.
Also, I am only doing it this way because I am strictly on a Windows environment. If there is a better way, again, please just a nudge in the right direction.
As stated in the comments of the question, all it takes is to invoke the sequence plackup E:/Mojolicious_Server.pl, making sure that you add the full path to the plackup script. In the case of @gregnnylf94, it was:
c:\strawberry\perl\site\bin\plackup E:\Mojolicious_server.pl
This is so because scheduled jobs do not have the same context as interactive shell ones. The most frequent problem comes from the PATH variable, which is key to finding what you want to execute.
This is true on Windows and Linux systems alike.
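Putting it together, a Task Scheduler friendly batch wrapper might look like the following sketch (the paths are the ones from the question; adjust them to your install):

@echo off
rem Use full paths throughout, so nothing depends on PATH being
rem set for the scheduled task.
"C:\strawberry\perl\bin\perl.exe" "C:\strawberry\perl\site\bin\plackup" "E:\Mojolicious_Server.pl"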
I'm running into problems testing a new addition to a module. (Specifically, the ~ operator seems not to be working in Math::Complex for this new feature only.) It's too bizarre to be what it appears to be, so the ideal scheme would be to add the -d option to the top (shebang) line of the .t program.
Well, I was quickly disabused of that idea! It does not invoke the debugger.
If I wanted to use the debugger, I'd need to create an edited copy of the .t program that:
Uses the module directly (a plain use statement), not in the form of
BEGIN { use_ok('My::Module') };
Does not "use Test::More;"
Makes a few other edits that cause gluteal pains
The problem with doing that is that any changes I make in the edited test program I still need to transfer back to the true test program used in "make test". Error prone at best.
I am already using "make test TEST_VERBOSE=1" so that my standard output shows up. But there's GOT to be a simpler way to invoke the debugger on the .t file.
Thanks for ideas here.
-- JS
use_ok tests are great, but you should have them in test files of their own, not test files that also test other things.
I'm not sure why you would need to avoid Test::More or use_ok to run the debugger, though. What happens when you run your test directly?
perl -d -Mblib t/yourtestfile.t
If all else fails, you can try using Enbugger in your test script.
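As a rough sketch of that approach (assuming Enbugger is installed; the test body is a made-up placeholder), you can drop into the debugger right where things go wrong without giving up Test::More:

use strict;
use warnings;
use Test::More;

BEGIN { use_ok('Math::Complex') }

# Load the debugger at runtime and stop right here; from this point
# on you are stepping through the test under the debugger.
use Enbugger;
Enbugger->stop;

ok(1, 'placeholder for the assertion that misbehaves');
done_testing();

Run it with an ordinary perl -Mblib t/yourtestfile.t; Enbugger pulls the debugger in at the stop call, with no -d needed on the command line.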
I have a small perl script which runs the /scripts/pkgacct command in cPanel using system(). The code looks like so:
print "\n/scripts/pkgacct --skiphomedir --nocompress $acc_name /my_backup\n\n";
system("/scripts/pkgacct --skiphomedir --nocompress $acc_name /my_backup");
my $bk_path = "/my_backup/cpmove-$acc_name.tar";
system("tar -xvf $bk_path -C /my_backup/");
When I run the script, only cPanel's default roundcube and horde databases are backed up. When I replace system() with exec, the script runs as expected but terminates as soon as exec is executed, i.e. the subsequent statements in the Perl script aren't executed. Using backticks shows the same behaviour as system(), i.e. it doesn't back up all the databases.
Could someone tell me what mistake I am making?
Alternately, how can I get the remaining statements to execute after the exec command?
Try using system like so:
system('/scripts/pkgacct', '--skiphomedir', '--nocompress', $acc_name, '/my_backup');
I've found that system works best when you break the command and its parameters into a list like this; Perl then runs the command directly rather than passing the whole string through a shell.
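As a follow-up sketch (reusing $acc_name and the paths from the question), you can also check what system reports back:

my $rc = system('/scripts/pkgacct', '--skiphomedir', '--nocompress',
                $acc_name, '/my_backup');
if ($rc != 0) {
    # system returns the raw wait status; the command's own exit code
    # is in the high byte.
    warn "pkgacct failed with exit code ", $rc >> 8, "\n";
}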
Try using IPC::Run (https://metacpan.org/pod/IPC::Run). Your code would look something like:
use IPC::Run qw(run);
print "\n/scripts/pkgacct --skiphomedir --nocompress $acc_name /my_backup\n\n";
run ['/scripts/pkgacct', '--skiphomedir', '--nocompress', $acc_name, '/my_backup'];
my $bk_path = "/my_backup/cpmove-$acc_name.tar";
run ['tar','-xvf',$bk_path,'-C','/my_backup/'];
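run also returns true on success, and it can capture the command's output into scalars, which helps when pkgacct misbehaves. A sketch using IPC::Run's scalar-ref redirection syntax:

use IPC::Run qw(run);

my ($out, $err);
run ['/scripts/pkgacct', '--skiphomedir', '--nocompress', $acc_name, '/my_backup'],
    '>', \$out, '2>', \$err
    or die "pkgacct failed: $err";
print $out;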
Note that I'm aware this is probably not the best way to do this, but I've run into it somewhere before and I'm curious as to the answer.
I have a perl script that is called from an init that runs and occasionally dies. To quickly debug this, I put together a quick wrapper perl script that basically consists of
# $path set from a library call.
while (1) {
    system("$path/command.pl " . join(" ", @ARGV) . " >> /var/log/outlog 2>&1");
    sleep 30; # Added this one later. See below...
}
Fire this up from the command line and it runs fine and as expected. command.pl is called and the script basically halts there until the child process dies, then goes around again.
However, when called from a start script (actually via start-stop-daemon), the system command returns immediately, leaving command.pl running. Then it goes around for another go. And again and again. (This was not fun without the sleep command.) ps reveals the parent of the (many) command.pl processes to be 1 rather than the PID of the wrapper script (which it is when I run from the command line).
Anyone know what's occurring?
Maybe the command.pl is not being run successfully. Maybe the file doesn't have execute permission (do you need to say perl command.pl?). Maybe you are running the command from a different directory than you thought, and the command.pl file isn't found.
There are at least three things you can check:
the standard error output of your command. For now you are folding it into the log by saying 2>&1. Remove that part and observe what errors the command produces.
the return value of system. system returns the command's wait status: if it is 0, you know the command ran and exited successfully; anything else means either the launch failed or the command exited with an error.
Perl's error variable $!. If there was a problem, Perl will set $!, which may or may not be helpful.
To summarize, try:
my $ec = system("command.pl >> /var/log/outlog");
if ($ec != 0) {
    # $ec is the raw wait status; the command's own exit code is $ec >> 8
    warn "wait status was $ec (exit code ", $ec >> 8, "), \$! is $!";
}
Update: if multiple instances of the command keep showing up in your ps output, then it sounds like the program is forking and running itself in the background. If that is indeed what the command is supposed to do, then what you do NOT want to do is run it in an endless loop.
Perhaps when run from a daemon, the system command is using a different shell than the one used when you are running as yourself. Maybe the shell used by the daemon does not recognize the >& construct.
Instead of system("..."), try the exec("...") function and see if that works for you.
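If the shell is indeed the culprit, one hedged workaround (assuming bash is available at /bin/bash) is to name the shell explicitly, so system does not depend on whatever sh the daemon uses:

# Hand the command line to a shell chosen explicitly, so the redirection
# syntax is interpreted the same way as in your interactive terminal.
system('/bin/bash', '-c',
       "$path/command.pl " . join(" ", @ARGV) . " >> /var/log/outlog 2>&1");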
I have a perl script (part of the XMLTV family of "grabbers", specifically tv_grab_oztivo).
I can successfully run it like this:
/sw/bin/perl /path/to/tv_grab_oztivo --output /path/to/tv.xml
I use the full paths to everything to eliminate issues with the Working Directory. Permissions shouldn't be a problem.
So, if I run it from the Terminal (Mac OSX) it works just fine.
But when I set it to run via a cron job, nothing appears to happen at all. No output is created etc.
There isn't anything wrong with the crontab as far as I can see, because if I substitute a helloworld.pl for the actual script, it runs just fine at the right time.
So, what can I do to debug? I can see from looking at %ENV in the two cases that the environment is very different, but what other approaches can I take to debugging? How can I see the output of the cron job, which might be some kind of perl "die" message or "not found" message from the shell or whatever?
Or should I be trying to somehow give the cron version of the command the same environment as when it's running as me?
It's often because you don't get the full environment when running under cron. Your best bet is to capture the output by using the command:
( /sw/bin/perl /path/to/tv_grab_oztivo ... ) >/tmp/qq 2>&1
and then have a look at /tmp/qq.
If it does turn out to be a missing environment, then you may need to put:
. ~/.profile
or something similar, into the execution chain of your cron job, such as:
( . ~/.profile ; /sw/bin/perl /path/to/tv_grab_oztivo ... ) >/tmp/qq 2>&1
If you're looking at %ENV in the two cases, I'd suggest, as a first step in your perl script, setting %ENV to what it is in a cron job and then trying to run it from the command line. You may need to exec yourself once for this to take full effect:
BEGIN {
    if (exists $ENV{something_in_your_env_not_in_cron}) {
        %ENV = (...); # fill in the environment as cron sets it
        exec $^X, $0, @ARGV;
    }
}
Now try running it, and seeing if there's anything you can do to debug it (including running under perl -d if required). Most likely, you'll find that you end up adding items back into %ENV one at a time until it magically starts working (LD_LIBRARY_PATH is a good one for this, but ORACLE_HOME or DB2HOME for Oracle or DB2 apps might be good choices, too). Then you can either set the variable in your script, or in the crontab.
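For the crontab route, a sketch might look like this (the schedule is illustrative; Vixie-style crontabs accept variable assignments above the entries, and /tmp/qq is the capture file from above):

PATH=/sw/bin:/usr/bin:/bin
15 4 * * * /sw/bin/perl /path/to/tv_grab_oztivo --output /path/to/tv.xml >/tmp/qq 2>&1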
I'd run a simple shell script by absolute path from the cron command.
Inside that script, I'd ensure that I trapped stdout and stderr to a known (or knowable) file. I'd also ensure that enough of your environment is set. On Unix, you get almost no environment set at all when you run a command via cron - I'm not sure about MacOS X. The standard culprit for problems is PATH. I have a separate .cronfile that sets my working environment enough that I usually don't have problems - that's an analogue of .profile.
On occasion if you can't figure out what's going wrong with your command line, the simplest way to fix it is to turn the whole thing into a shell script. Ideally you shouldn't have to do this, but it can be the fastest way to solve the problem.
File: /files/cron1.sh
#!/bin/sh
/sw/bin/perl /path/to/tv_grab_oztivo --output /path/to/tv.xml
And then in cron:
/files/cron1.sh
This allows you to test the script independent of cron. Remember though that your login shell runs with different environment variables than cron does.
cron usually captures stdout and stderr and e-mails any output to the crontab owner.
Did you double check your crontab entry to make sure it's valid and will execute at the right time?
Make sure that the script does not need any environment variables set. Otherwise, wrap it in another (bash) script where you can set the environment variables the original script expects.
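For example, a minimal wrapper might look like this (the exported PATH is a placeholder for whatever the grabber actually needs):

#!/bin/bash
# Set up the environment the grabber expects, then exec the real command
# so cron sees its exit status directly.
export PATH=/sw/bin:/usr/bin:/bin
exec /sw/bin/perl /path/to/tv_grab_oztivo --output /path/to/tv.xml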