I want to find out DOCUMENT_ROOT in startup.pl, but the best I can do is get the server root:
use Apache2::ServerUtil ();
$server_root = Apache2::ServerUtil::server_root();
which is quite useless. I can set an environment variable with
PerlSetEnv DOCUMENT_ROOT /path/to/www
but I would rather avoid extra configuration if possible.
Is there a way to get DOCUMENT_ROOT by other means?
See Apache2::Directive. For example, on my development system:
use Apache2::Directive ();
use File::Slurp ();

my $tree  = Apache2::Directive::conftree();
my $vhost = $tree->lookup(VirtualHost => 'unur.localdomain:8080');
File::Slurp::write_file("C:/bzzzt.txt", [ $vhost->{DocumentRoot}, "\n" ]);
created a file C:/bzzzt.txt with the contents "E:/srv/unur/deploy/htdocs" after I discovered that I had to specify my virtual hosts using
<VirtualHost unur.localdomain:8080>
...
</VirtualHost>
<VirtualHost qtau.localdomain:8080>
...
</VirtualHost>
rather than <VirtualHost *:8080>. Otherwise, each <VirtualHost *:8080> section was overwriting the previous one.
This is annoying. I would have thought each VirtualHost entry would have been keyed by the ServerName used.
As for whether there is an easier way: I am afraid there isn't if you want to do this in startup.pl. However, it may not be necessary to do it in startup.pl at all. You can also find the document root while processing a request, using Apache2::RequestUtil::document_root.
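For example, in a response handler (sketch only; this needs to run under mod_perl, and the package name is made up):

```perl
# Sketch only: this must run inside mod_perl, where Apache passes
# the Apache2::RequestRec object $r to the handler.
package MyApp::Handler;    # hypothetical handler package

use strict;
use warnings;
use Apache2::RequestUtil ();             # adds document_root() to $r
use Apache2::Const -compile => 'OK';

sub handler {
    my $r = shift;
    my $docroot = $r->document_root;     # per-request document root
    $r->content_type('text/plain');
    $r->print("document root: $docroot\n");
    return Apache2::Const::OK;
}

1;
```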
If you are running Registry scripts and want to change to the DOCUMENT_ROOT directory, then you should be able to add:
chdir $ENV{DOCUMENT_ROOT}
or die "Cannot chdir to '$ENV{DOCUMENT_ROOT}': $!";
to the script instead of having to mess around with startup.pl and handlers etc.
I'm trying to use Perl's do EXPR function as a poor man's config parser, using a second .pl file that just returns a list as configuration information. (I think this is probably the ideal use for do, not least because I can write "do or die" in my code.) Here's an example:
main.pl
# Go read the config file
my %config = do './config.pl';
# do something with it
$web_object->login($config{username}, $config{password});
config.pl
# Configuration file for main script
(
username => "username",
password => "none_of_your_business",
favorite_color => "0x0000FF",
);
Reading the perldoc for do gives a lot of helpful advice about relative paths - searching @INC and modifying %INC, special warnings about 5.26 no longer searching "." , etc. But it also has these bits:
# load the exact specified file (./ and ../ special-cased)...
Using do with a relative path (except for ./ and ../), like...
And then it never actually bothers to explain the Special Case path handling for "./" or "../" - an important omission!
So my question(s) are all variations on "what really happens when you do './file.pl';"? For instance...
Does this syntax still work in 5.26, even though CWD is removed from @INC?
From whose perspective is "./" anyway: the Perl binary, the Perl script executed, CWD from the user's shell, or something else?
Are there security risks to be aware of?
Is this better or worse than modifying @INC and just using a base filename?
Any insight is appreciated.
OK, so - to start with, I'm not sure your config.pl is really the right approach. Trying to evaluate code in order to 'parse config' isn't a great plan in general: it's rather prone to unpleasant glitches and security flaws, so it should be reserved for when it's actually needed.
I would urge you to do it differently by either:
Write it as a module
Something like this:
package MyConfig;

# Configuration file for main script
our %config = (
    username       => "username",
    password       => "none_of_your_business",
    favorite_color => "0x0000FF",
);

1;    # a module must end with a true value
You could then in your main script:
use MyConfig; # note - the file must be named MyConfig.pm and be in a directory in @INC
and access it as:
print $MyConfig::config{username},"\n";
If you can't put it in the existing @INC - and there may be reasons you can't - FindBin lets you use paths relative to your script's location:
use FindBin;
use lib "$FindBin::Bin";
use MyConfig;
Write your 'config' in a defined, parsable format, rather than as executable code.
YAML
YAML is a particularly solid choice for a config file:
use YAML::XS;
open ( my $config_file, '<', 'config.yml' ) or die $!;
my $config = Load ( do { local $/; <$config_file> });
print $config -> {username};
And your config file looks like:
username: "username"
password: "password_here"
favourite_color: "green"
air_speed_of_unladen_swallow: "african_or_european?"
(YAML also supports multi-dimensional data structures, arrays etc. You don't seem to need these though.)
JSON
A JSON-based config looks much the same; the input file is:
{
"username": "username",
"password": "password_here",
"favourite_color": "green",
"air_speed_of_unladen_swallow": "african_or_european?"
}
You read it with:
use JSON;
open ( my $config_file, '<', 'config.json' ) or die $!;
my $config = from_json ( do { local $/; <$config_file> });
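If installing JSON from CPAN is a hurdle, the core JSON::PP module reads the same format. A small runnable sketch, writing a stand-in config file first so the example is self-contained:

```perl
use strict;
use warnings;
use JSON::PP qw(decode_json);    # in core since Perl 5.14
use File::Temp qw(tempfile);

# Write a stand-in for config.json
my ( $fh, $path ) = tempfile( SUFFIX => '.json', UNLINK => 1 );
print {$fh} '{ "username": "username", "favourite_color": "green" }';
close $fh;

# Slurp and decode, just as with the JSON module above
open my $config_file, '<', $path or die $!;
my $config = decode_json( do { local $/; <$config_file> } );
print "color: $config->{favourite_color}\n";
```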
Using relative paths to config:
You don't have to worry about @INC at all; you can simply open the file by relative path. A better bet, though, is NOT to do that, and to use FindBin instead - it lets you build paths "relative to my script's location", which is much more robust.
use FindBin;
open ( my $config_file, '<', "$FindBin::Bin/config.yml" ) or die $!;
And then you'll know you're reading the one in the same directory as your script, no matter where it's invoked from.
Specific questions:
From whose perspective is "./" anyway: the Perl binary, the Perl script executed, CWD from the user's shell, or something else?
The current working directory is inherited from the parent process - so it is the user's shell's cwd by default, unless the perl script does a chdir itself.
Are there security risks to be aware of?
Any time you evaluate something as if it were executable code (and a do EXPR can be exactly that) there's a security risk. It's probably not huge here, because the script runs as the user, and the user is the person who can tamper with the CWD. The core risks are:
The user is in a 'different' directory where someone else has put a malicious thing for them to run (e.g. imagine if 'config.pl' had rm -rf /* in it). Maybe there's a 'config.pl' in /tmp that they 'run' accidentally?
The thing you're evaling has a typo, and breaks the script in funky and unexpected ways. (E.g. maybe it redefines $[ and messes with program logic henceforth in ways that are hard to debug)
The script does something in a privileged context. That doesn't appear to be the case here, but see the previous point and imagine you are root or another privileged user.
Is this better or worse than modifying @INC and just using a base filename?
Worse, IMO. Actually, just don't modify @INC at all; use a full path, or a relative one built with FindBin. And don't eval things when it's not necessary.
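If you do stay with do, a defensive sketch looks like this - using an absolute path so neither @INC nor "." is involved, and a temporary file standing in for config.pl:

```perl
use strict;
use warnings;
use File::Temp qw(tempdir);
use File::Spec;

# Write a stand-in config file (plays the role of config.pl)
my $dir  = tempdir( CLEANUP => 1 );
my $path = File::Spec->catfile( $dir, 'config.pl' );
open my $fh, '>', $path or die "Cannot write '$path': $!";
print {$fh} "( username => 'username', favorite_color => '0x0000FF' );\n";
close $fh;

# With an absolute path, do reads exactly that file
my %config = do $path;
die "Couldn't compile '$path': $@" if $@;
die "Couldn't read '$path': $!"    unless %config;

print "user: $config{username}\n";
```

Note the two checks: do sets $@ on a compile error and $! on a read error, so the "do or die" idiom needs both.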
I have run this Perl code:
#!/usr/bin/perl
print "content-type: text/html \n\n";
print "Hello World.\n";
I have tried it in two ways. The first was following "Testing your Perl installation", but when I run it that way there is trouble: it asks me to choose a program to run it with, and nothing runs.
The second way was writing my first script with Padre, the Perl IDE, but when I try to save the code it does not offer a Perl file extension, so I can't save it as a Perl file. What could I do?
Your code looks like you want a CGI program. CGI means that you call your program through a web server and get a web page back. While vstm's comment was of course right for non-CGI programs, your example requires a little more setup in order to work like that.
You will need to install a web server. Take a look at XAMPP. It is simple to install and maintain and comes with MySQL as well as an Apache installation. I recommend the Lite version since that does not have all the overhead.
Once you've installed it, you need to configure it so it can run your Perl scripts. I take it you have already installed ActivePerl. You then need to tweak the Apache config.
In c:\xampp\apache\conf\httpd.conf you need to find the line that says
<Directory "C:/xampp/htdocs">
and read the comments (marked with #). You have to add ExecCGI inside the <Directory> section. Do that for every directory you want perl scripts to be run. Then look for a line that says
AddHandler cgi-script .cgi .pl .asp
and make sure it is not commented out.
Once you're done, place your program in the c:\xampp\htdocs folder (cgi-bin should also work) and change the shebang line (the first line, starting with #!) to point to where you've installed ActivePerl, e.g. C:\perl\bin\perl.exe. It tells Apache which program it should use to execute the Perl script.
Also, add some more lines to your code:
#!C:\perl\bin\perl.exe
use strict;
use warnings;
use CGI;
use CGI::Carp('fatalsToBrowser');
print "Content-type: text/html \n\n";
print "Hello World.\n";
Now you need to run the Apache web server. In the XAMPP installation dir there are several batch files that control Apache and MySQL. There's also a xampp-control.exe. Run it. In the new window, click on the Start button next to Apache.
In your browser, go to http://localhost/<yourscript.pl>. It should now say "Hello World."
If it does not, make sure you're not running Skype. It blocks port 80, which Apache tries to run on, so you would need to change Apache's port to something else. Refer to this video.
A few words on the changes in the code I made and what they do:
use strict; should always be in your code. It forces you to honor certain guidelines and to write better code. This might seem strange in a Hello World program, but please do it anyway.
use warnings; tells you about things that might go wrong. Warnings are not errors but rather perl being helpful about stuff you might not know yourself. Use it.
use CGI; loads the CGI module, which provides helpers for reading request parameters and producing headers when you work with CGI programs.
print "Content-type: text/html \n\n"; is needed so the browser knows what to expect - in this case, an HTML page. It is part of the HTTP header and contains a MIME type.
use CGI::Carp('fatalsToBrowser'); makes errors go to the browser. Without it, you'd never know about them unless you looked in Apache's error log.
I am trying to run this script
mail.pl
#!/usr/bin/perl
print "Content-type: text/html \n\n"; #HTTP HEADER
#CREATE
$email = "pt-desu\@*****-****.com";
#PRINT
print "$email<br />";
The file can be accessed via http://mysite.com/p/cgi-bin/mail.pl, but when you go there the browser prompts to download the file and does not display anything.
Your webserver isn't set up to process *.pl files as Perl; it's instead just serving them up as plain-text. Consult your webserver's documentation for setting this up.
For Apache, try consulting this tutorial. The key bits are:
AddHandler cgi-script .cgi .pl
and
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
and
Options FollowSymLinks +ExecCGI
...in your httpd.conf file.
I am fairly new with perl and apache and seem to be having a small problem with my code.
I have 3 files:
hw.pm
package hw;
sub calc {
my $num1 = shift;
my $num2 = shift;
return $num1 + $num2;
}
1;
startup.pl
use lib qw(path to where hw.pm is located);
1;
hel.pl
#!/usr/bin/perl -w
use hw;
use CGI qw(:standard);
print header;
my $ans = calc(5,4);
print $ans;
I have no problem restarting Apache, but when I access hel.pl from the browser I get an error: Can't locate hw.pm in @INC
Should the startup.pl have already included it in @INC? Or am I missing something?
I am using perl v5.10.1 and Apache2 v2.2.16
Perl is not finding hw.pm.
Try copying this line from startup.pl
use lib qw(path to where hw.pm is located);
to hel.pl, placing it before the "use hw;" line. But first make sure the path is correct.
@INC - The array @INC contains the list of places to look for Perl scripts to be evaluated by the do EXPR, require, or use constructs. It initially consists of the arguments to any -I command line switches, followed by the default Perl library, probably "/usr/local/lib/perl", followed by ".", to represent the current directory.
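As a tiny illustration of what use lib does (the path here is made up), it simply prepends its arguments to @INC at compile time, which is exactly what startup.pl is trying to arrange:

```perl
use strict;
use warnings;

use lib '/opt/myapp/lib';    # hypothetical path; substitute your module directory

# use lib unshifts onto @INC, so our directory is searched first
print "first \@INC entry: $INC[0]\n";
die "use lib did not prepend" unless $INC[0] eq '/opt/myapp/lib';
```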
I managed to solve it. Initially I had this in my apache2.conf:
PerlRequire startup.pl
but after adding this code:
<Directory /var/www>
SetHandler perl-script
PerlResponseHandler ModPerl::Registry
PerlOptions +ParseHeaders
Options +ExecCGI
</Directory>
I was able to access my modules from hel.pl
Thanks guys for your help.
Do you think changing directories inside bash or Perl scripts is acceptable? Or should one avoid doing this at all costs?
What is the best practice for this issue?
Like Hugo said, you can't affect your parent process's cwd, so there's no problem.
Where the question is more applicable is if you don't control the whole process, like in a subroutine or module. In those cases you want to exit the subroutine in the same directory as you entered, otherwise subtle action-at-a-distance creeps in which causes bugs.
You can do this by hand...
use Cwd;
sub foo {
my $orig_cwd = cwd;
chdir "some/dir";
...do some work...
chdir $orig_cwd;
}
but that has problems. If the subroutine returns early or dies (and the exception is trapped) your code will still be in some/dir. Also, the chdirs might fail and you have to remember to check each use. Bleh.
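For completeness, a core-only sketch that restores the original directory even if the code dies (File::Temp just supplies a scratch directory for the demo):

```perl
use strict;
use warnings;
use Cwd qw(getcwd);
use File::Temp qw(tempdir);

# Run $code inside $dir, restoring the original cwd no matter what
sub in_dir {
    my ( $dir, $code ) = @_;
    my $orig = getcwd();
    chdir $dir or die "Cannot chdir to '$dir': $!";
    my @result = eval { $code->() };
    my $err = $@;
    chdir $orig or die "Cannot chdir back to '$orig': $!";
    die $err if $err;    # re-throw after restoring
    return @result;
}

my $start = getcwd();
my $tmp   = tempdir( CLEANUP => 1 );
in_dir( $tmp, sub { print "working in a scratch dir\n" } );
eval { in_dir( $tmp, sub { die "oops\n" } ) };    # dies, but still restores
print "back where we started\n" if getcwd() eq $start;
```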
Fortunately, there's a couple modules to make this easier. File::pushd is one, but I prefer File::chdir.
use File::chdir;
sub foo {
local $CWD = 'some/dir';
...do some work...
}
File::chdir turns changing directories into assigning to $CWD. And you can localize $CWD so it will reset at the end of your scope, no matter what. It also automatically checks whether the chdir succeeds and throws an exception otherwise. I sometimes use it in scripts because it's just so convenient.
The current working directory is local to the executing shell, so you can't affect the user's unless he is "dotting" your script (running it in the current shell, as opposed to running it normally, which creates a new shell process).
A very good way of doing this is to use subshells, which i often do in aliases.
alias build-product1='(cd "$working_copy/delivery"; mvn package;)'
The parentheses make sure that the command is executed in a sub-shell, and thus will not affect the working directory of my shell. It also does not affect the last working directory, so cd - works as expected.
I don't do this often, but sometimes it can save quite a bit of headache. Just be sure that if you change directories, you always change back to the directory you started from. Otherwise, changing code paths could leave the application somewhere it should not be.
For Perl, you have the File::pushd module from CPAN which makes locally changing the working directory quite elegant. Quoting the synopsis:
use File::pushd;
chdir $ENV{HOME};
# change directory again for a limited scope
{
my $dir = pushd( '/tmp' );
# working directory changed to /tmp
}
# working directory has reverted to $ENV{HOME}
# tempd() is equivalent to pushd( File::Temp::tempdir )
{
my $dir = tempd();
}
# object stringifies naturally as an absolute path
{
my $dir = pushd( '/tmp' );
my $filename = File::Spec->catfile( $dir, "somefile.txt" );
# gives /tmp/somefile.txt
}
I'll second Schwern's and Hugo's comments above. Note Schwern's caution about returning to the original directory in the event of an unexpected exit. He provided appropriate Perl code to handle that. I'll point out the shell (Bash, Korn, Bourne) trap command.
trap "cd $saved_dir" 0
will return to saved_dir on subshell exit (if you're .'ing the file).
Consider also that shells on Unix and Windows have a built-in directory stack: pushd and popd. It's extremely easy to use.
Is it at all feasible to use fully-qualified paths, and not make any assumptions about which directory you're currently in? e.g.
use FileHandle;
use FindBin qw($Bin);
# ...
my $file = FileHandle->new("< $Bin/somefile");
rather than
use FileHandle;
# ...
my $file = FileHandle->new("< somefile");
This will probably be easier in the long run, as you don't have to worry about weird things happening (your script dying or being killed before it could put the current working directory back to where it was), and is quite possibly more portable.
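A small runnable sketch of the idea (the data file name is made up):

```perl
use strict;
use warnings;
use FindBin qw($Bin);
use File::Spec;

# $Bin is the absolute directory of the running script, so this
# path is the same no matter where the script is invoked from
my $path = File::Spec->catfile( $Bin, 'somefile.txt' );    # hypothetical data file

print "script dir: $Bin\n";
print "data file : $path\n";
die "expected an absolute path" unless File::Spec->file_name_is_absolute($path);
```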