I have a module in the parent directory of my script and I would like to 'use' it.
If I do
use '../Foo.pm';
I get syntax errors.
I tried to do:
push @INC, '..';
use EPMS;
and .. apparently doesn't show up in @INC
I'm going crazy! What's wrong here?
use takes place at compile-time, so this would work:
BEGIN { push @INC, '..' }
use EPMS;
But the better solution is to use lib, which is a nicer way of writing the above:
use lib '..';
use EPMS;
In case you are running from a different directory, though, the use of FindBin is recommended:
use FindBin; # locate this script
use lib "$FindBin::RealBin/.."; # use the parent directory
use EPMS;
There are several ways you can modify @INC:
set PERL5LIB, as documented in perlrun
use the -I switch on the command line, also documented in perlrun; a short example follows this list. You can also apply this automatically with PERL5OPT, but just use PERL5LIB if you are going to do that.
use lib inside your program, although this is fragile since another person on a different machine might have it in a different directory.
Manually modify @INC, making sure you do that at compile time if you want to pull in a module with use. That's too much work, though.
require the filename directly. While this is possible, it doesn't allow that filename to load files in the same directory. This would definitely raise eyebrows in a code review.
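For example (the paths here are placeholders, not anything your system necessarily has), the first two approaches look like this on the command line:
export PERL5LIB=/path/to/your/modules    # picked up by every perl you run from this shell
perl -I/path/to/your/modules script.pl   # one-off, for this invocation only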
Personally I prefer to keep my modules (those that I write for myself or for systems I can control) in a certain directory, and also to place them in a subdirectory. As in:
/www/modules/MyMods/Foo.pm
/www/modules/MyMods/Bar.pm
And then where I use them:
use lib qw(/www/modules);
use MyMods::Foo;
use MyMods::Bar;
As an aside, when it comes to pushing, I prefer the fat-arrow comma:
push @array => $pushee;
But that's just a matter of preference.
'use lib' is the answer, as @ephemient mentioned earlier. One other option is to use require/import instead of use. It means the module wouldn't be loaded at compile time, but at run time instead.
That will allow you to modify @INC as you tried there, or you could pass require a path to the file instead of the module name. From 'perldoc -f require':
If EXPR is a bareword, the require assumes a ".pm" extension and
replaces "::" with "/" in the filename for you, to make it easy to
load standard modules. This form of loading of modules does not risk
altering your namespace.
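In other words (a small illustration, with Some::Module standing in for any module that lives on @INC):
require Some::Module;      # bareword: Perl loads "Some/Module.pm" from @INC
require 'Some/Module.pm';  # string: the :: to / translation is up to you
require '../EPMS.pm';      # string: an explicit path, relative or absolute, also works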
You have to have the push processed before the use is -- and use is processed early. So, you'll need a BEGIN { push @INC, ".."; } to have a chance, I believe.
As reported by "perldoc -f use":
It is exactly equivalent to
BEGIN { require Module; import Module LIST; }
except that Module must be a bareword.
Putting that another way, "use" is equivalent to:
running at compile time,
converting the package name to a file name,
require-ing that file name, and
import-ing that package.
So, instead of calling use, you can call require and import inside a BEGIN block:
BEGIN {
require '../EPMS.pm';
EPMS->import();
}
And of course, if your module doesn't actually do any symbol exporting or other initialization when you call import, you can leave that line out:
BEGIN {
require '../EPMS.pm';
}
Some IDEs don't work correctly with 'use lib', the favored answer. I found 'use lib::relative' works with my IDE, JetBrains' WebStorm.
See the POD for lib::relative.
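A minimal sketch, assuming the module sits in the script's parent directory as in the original question:
use lib::relative '..';  # resolved relative to this file, not the current working directory
use EPMS;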
The reason it's not working is that what you're adding to @INC is relative to the current working directory in the command line, rather than to the script's directory.
For example, if you're currently in:
a/b/
And the script you're running has this path:
a/b/modules/tests/test1.pl
BEGIN {
unshift(@INC, "..");
}
The above will mean that .. resolves to directory a/ rather than a/b/modules.
Either change .. to ./modules in your code, or cd modules/tests in the command line before running the script again.
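If you want the @INC entry to follow the script rather than the shell's current directory, a FindBin-based variant of the snippet above (the same idea as in the earlier answers) avoids the problem entirely:
use FindBin qw($RealBin);
use lib "$RealBin/..";  # a/b/modules, no matter where you run the script from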
I don't know how to do one thing in Perl and I feel I am doing something fundamentally wrong.
I am doing a larger project, so I split the task into different modules. I put the modules into the project directory, in the "modules/" subdirectory, and added this directory to PERL5LIB and PERLLIB.
All of these modules use some configuration, saved in an external file in the main project directory - "../configure.yaml" if you look at it from the module file's perspective.
But right now, when I use a module through "use", all relative paths in the module are taken relative to the current directory of the script using these modules, not to the directory of the module itself. Not even FindBin or anything like it helps.
How do I load a file, relative from the module path? Is that even possible / advisable?
Perl stores where modules are loaded from in the %INC hash. You can load things relative to that:
package Module::Foo;
use File::Spec;
use strict;
use warnings;
my ($volume, $directory) = File::Spec->splitpath( $INC{'Module/Foo.pm'} );
my $config_file = File::Spec->catpath( $volume, $directory, '../configure.yaml' );
%INC's keys are based on a strict translation of :: to / with .pm appended, even on
Windows, VMS, etc.
Note that the values in %INC may be relative to the current directory if you put relative directories in @INC, so be careful if you change directories between the require/use and checking %INC.
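If that is a concern, one option (a sketch using the core Cwd module) is to normalise the %INC entry to an absolute path as soon as the module has been loaded:
use Cwd qw(abs_path);
my $module_path = abs_path( $INC{'Module/Foo.pm'} );  # stays valid after any later chdir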
The global %INC table contains an entry for every module you have use'd or require'd, associated with the place that Perl found that module.
use YAML;
print $INC{"YAML.pm"};
>> /usr/lib/perl5/site_perl/5.8/YAML.pm
Is that more helpful?
There's a module called File::ShareDir that exists to solve this problem. You were on the right track trying FindBin, but FindBin always finds the running program, not the module that's using it. ShareDir does something quite similar to ysth's solution, except wrapped up in a nice interface.
Usage is as simple as
my $filename = File::ShareDir::module_file(__PACKAGE__,
'my/data.txt');
# and then open $filename or whatever else.
or
my $dirname = File::ShareDir::module_dir(__PACKAGE__);
# Play ball!
Change your use Module call to require Module (or require Module; Module->import(LIST)). Then use the debugger to step through the module loading process and see where Perl thinks it is loading the files from.
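For example (a sketch, with Module::Foo and the bar import standing in for your actual module and its exports):
# use Module::Foo qw(bar);      # the original compile-time load
require Module::Foo;            # run-time load -- you can step into this in the debugger
Module::Foo->import(qw(bar));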
I'm new to both Perl and its library files.
I made this simple subroutine and saved it in a file with a .lib extension:
sub shorterline {
print "Content-type: text/html\n\n";
}
1;
When I try to use the subroutine from the Perl file below, which has a .cgi extension, it doesn't work somehow:
#!/usr/bin/perl
require 'mysubs.lib';
&shorterline;
print "Hello, world!";
I gave the .cgi file the right chmod permissions, but it still doesn't work. What seems to be the problem?
Your descriptions of what the problem is are rather unclear.
it doesn't work somehow
the .cgi still doesn't work
Without knowing what problems you're seeing, it's hard to know what the problem is. But I tried copying your code and running the program from the command line and I got this error message:
Can't locate mysubs.lib in @INC (@INC contains: ...)
So I think you are using a recent version of Perl and are running up against this change:
Removal of the current directory (".") from @INC
The perl binary includes a default set of paths in @INC. Historically it has also included the current directory (".") as the final entry, unless run with taint mode enabled (perl -T). While convenient, this has security implications: for example, where a script attempts to load an optional module when its current directory is untrusted (such as /tmp), it could load and execute code from under that directory.
Starting with v5.26, "." is always removed by default, not just under tainting. This has major implications for installing modules and executing scripts.
If this is the problem, then you can fix it by adding the script's directory to @INC as follows:
use FindBin qw( $RealBin );
use lib $RealBin;
before your call to require. If that doesn't solve your problem, perhaps you would consider sharing a little more detail about the problems that you are experiencing.
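Put together, a minimal sketch of the fixed CGI script (assuming mysubs.lib sits in the same directory as the script) would be:
#!/usr/bin/perl
use FindBin qw( $RealBin );
use lib $RealBin;      # put the script's own directory back on @INC
require 'mysubs.lib';  # now found via @INC
shorterline();
print "Hello, world!";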
Is there any way to report the missing modules used in a Perl file beforehand, instead of getting an error?
I have something like use Digest::MD5 and use File::DosGlob in my Perl program. Whenever users run the script, they get an error if a specific module is not installed on their system, and they cannot understand the default @INC error message. So I would like to tell them clearly that these modules need to be installed to run the script.
You could build your own verification by using a BEGIN block. Those are run at compile time, just like use is. Keep in mind that use Foo is essentially nothing other than this:
BEGIN {
require Foo;
Foo->import;
}
The following code replaces all use statements with a single BEGIN block and places each load inside an eval. That's essentially a try/catch mechanism.
We need the string eval (which is considered evil around here) because require only converts a package name with colons :: to a path if the argument is a bareword. Because we have the name in $module, it's a string, so we need to put it into an eval, as described in require's docs.
If that string eval fails, we die. That's caught by the outer eval block, and $@ is set. We can then check whether it contains our module name, in which case we naively assume the failure was because that module is not installed. This check could be a bit more elaborate.
We keep track of any failures in $fails, and if there were any, we stop.
#!/usr/bin/perl
use strict;
use warnings;
# all our use statements go here
BEGIN {
my $fails;
foreach my $module ( qw/Digest::MD5 File::DosGlob ASDF/ ) {
eval {
eval "require $module" or die; # because $module is not a bareword
$module->import;
};
if ($@ && $@ =~ /$module/) {
warn "You need to install the $module module";
$fails++;
}
}
exit if $fails;
}
# ...
Above I included ASDF, which I don't have, so when run it will say
You need to install the ASDF module at /home/code/scratch.pl line 1335.
You might want to make that message a bit more verbose. If your users are not able to understand the default error message that Perl gives when it cannot find a module, it might be wise to include a guide on how to install stuff right there.
Note that both modules you listed have been included with Perl for a while (read: since March 2002). So why would you want to do this for those modules?
$ corelist Digest::MD5
Data for 2014-09-14
Digest::MD5 was first released with perl v5.7.3
$ corelist File::DosGlob
Data for 2014-09-14
File::DosGlob was first released with perl 5.00405
A better way would be to ship your program as a distribution that can be installed, and include a Makefile.PL or a cpanfile or something similar that lists its dependencies. There is a guide in perlnewmod on how to start a new module. You wouldn't want to upload it to CPAN, obviously, but the basics are the same.
With this, your users would get all dependencies installed automatically.
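For illustration, a minimal cpanfile for the two modules in the question would contain nothing more than:
requires 'Digest::MD5';
requires 'File::DosGlob';
A user could then install any missing dependencies with a tool such as cpanm --installdeps . (assuming they have cpanminus available).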
You could use Devel::Modlist; it will list all the modules required by your program:
perl -d:Modlist test.pl
There's another module, Module::ScanDeps, which comes with a utility, scandeps.pl, that you can use on your script as:
scandeps.pl test.pl
Note that sanity-checking your Perl code with perl -c is dangerous, because it runs BEGIN and CHECK blocks and can therefore execute code, so use it carefully.
Your question isn't really clear about what "beforehand" means. To check whether a Perl program's syntax is correct and its directly included modules are resolvable, use
perl -c <perl-program.pl>
This checks the syntax of your file and ensures that any modules used directly by your code exist. However, it does not transitively check the entire dependency tree, only the modules mentioned in perl-program.pl.
I want to create a module in Perl. The code below is not working properly. I want to create a word-count module that I can reuse later. Can anyone help me create this module? This is my first attempt at creating a module, so kindly help me out.
package My::count
use Exporter qw(import);
our @Export_ok = qw(line_count);
sub line_count {
my $line = @_;
return $line;
}
I saved the above code in count.pm
use My::count qw(line_count);
open INPUT, "<filename.txt";
$line++;
print "line count is $line \n";
I saved the above script with a .pi extension.
This code shows errors when I run it on an Ubuntu platform. Kindly help me fix these errors.
Perl scripts are stored with a .pl extension. When you say use My::count qw(line_count);, Perl tries to find the module in the directories stored in the @INC variable. You can run perl with the -I flag to specify a directory in which to search for custom packages, as shown below. Refer to this question for more info.
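For example (with placeholder names):
perl -I/path/to/your/modules your_script.pl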
By convention, Perl packages usually have a capitalized first letter, so your module would be more in keeping with convention if you called it package My::Count;. Typically, lower-cased module names are reserved for pragmas such as strict and warnings. So go ahead and change the name to My::Count.
Next, save the module in a path such as lib/My/Count.pm. Using a lib directory is a convention as well.
Then you have to tell your script where to find package My::Count.
Let's assume you're storing your module and your executable like this:
~/project/lib/My/Count.pm
~/project/bin/count.pl
Notice I also used a .pl extension for the executable. This is another convention. Often on Unix-like systems people omit the .pl extension altogether.
Finally, in your count.pl file you need to tell perl where to find the library. Often that is done like this:
#!/usr/bin/env perl
use strict;
use warnings;
use FindBin qw($Bin);
use lib "$Bin/../lib";
use My::Count 'line_count';
# The rest goes here...
As you can see, we're using FindBin to locate where the executable is stored, and then telling perl that it should look (among other places) in the lib folder stored in a relative location to the executable.
Naturally, as this is Perl, this is not the only way to do it. But it's one common idiom.
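To round this off, here is a sketch of what the two files might look like with the obvious bugs fixed (the missing semicolon after package, the @EXPORT_OK spelling, and a line_count that actually counts the lines of the filehandle it is given). Treat it as one possible version, not the only one.
lib/My/Count.pm:
package My::Count;
use strict;
use warnings;
use Exporter qw(import);
our @EXPORT_OK = qw(line_count);
sub line_count {
    my ($fh) = @_;
    my $lines = 0;
    $lines++ while <$fh>;
    return $lines;
}
1;
bin/count.pl:
#!/usr/bin/env perl
use strict;
use warnings;
use FindBin qw($Bin);
use lib "$Bin/../lib";
use My::Count qw(line_count);
open my $fh, '<', 'filename.txt' or die "Cannot open filename.txt: $!";
print "line count is ", line_count($fh), "\n";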
You need to move your count.pm file into a directory called My, so that you have the following:
./count.pl
./My/count.pm
Do you think changing directories inside bash or Perl scripts is acceptable? Or should one avoid doing this at all costs?
What is the best practice for this issue?
Like Hugo said, you can't affect your parent process's cwd, so there's no problem.
Where the question is more applicable is when you don't control the whole process, such as in a subroutine or module. In those cases you want to exit the subroutine in the same directory you entered it in; otherwise subtle action-at-a-distance creeps in and causes bugs.
You can do this by hand...
use Cwd;
sub foo {
my $orig_cwd = cwd;
chdir "some/dir";
...do some work...
chdir $orig_cwd;
}
but that has problems. If the subroutine returns early or dies (and the exception is trapped), your code will still be in some/dir. Also, the chdirs might fail, and you have to remember to check each one. Bleh.
Fortunately, there are a couple of modules to make this easier. File::pushd is one, but I prefer File::chdir.
use File::chdir;
sub foo {
local $CWD = 'some/dir';
...do some work...
}
File::chdir turns changing directories into assigning to $CWD. And you can localize $CWD so it will reset at the end of your scope, no matter what. It also automatically checks whether the chdir succeeds and throws an exception otherwise. I sometimes use it in scripts because it's just so convenient.
The current working directory is local to the executing shell, so you can't affect the user unless they are "dotting" your script (running it in the current shell, as opposed to running it normally, which creates a new shell process).
A very good way of doing this is to use subshells, which I often do in aliases.
alias build-product1='(cd $working-copy/delivery; mvn package;)'
The parentheses make sure that the command is executed in a sub-shell, and thus will not affect the working directory of my shell. They also don't affect the last working directory, so cd -; works as expected.
I don't do this often, but sometimes it can save quite a bit of headache. Just be sure that if you change directories, you always change back to the directory you started from. Otherwise, changing code paths could leave the application somewhere it should not be.
For Perl, you have the File::pushd module from CPAN which makes locally changing the working directory quite elegant. Quoting the synopsis:
use File::pushd;
chdir $ENV{HOME};
# change directory again for a limited scope
{
my $dir = pushd( '/tmp' );
# working directory changed to /tmp
}
# working directory has reverted to $ENV{HOME}
# tempd() is equivalent to pushd( File::Temp::tempdir )
{
my $dir = tempd();
}
# object stringifies naturally as an absolute path
{
my $dir = pushd( '/tmp' );
my $filename = File::Spec->catfile( $dir, "somefile.txt" );
# gives /tmp/somefile.txt
}
I'll second Schwern's and Hugo's comments above. Note Schwern's caution about returning to the original directory in the event of an unexpected exit. He provided appropriate Perl code to handle that. I'll point out the shell (Bash, Korn, Bourne) trap command.
trap "cd $saved_dir" 0
will return to saved_dir on subshell exit (if you're .'ing the file).
Consider also that Unix shells and Windows have a built-in directory stack: pushd and popd. It's extremely easy to use.
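For example, from an interactive shell:
pushd /tmp    # remember the current directory and change to /tmp
# ... do some work in /tmp ...
popd          # return to the remembered directory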
Is it at all feasible to try and use fully-qualified paths, and not make any assumptions about which directory you're currently in? e.g.
use FileHandle;
use FindBin qw($Bin);
# ...
my $file = new FileHandle("< $Bin/somefile");
rather than
use FileHandle;
# ...
my $file = new FileHandle("< somefile");
This will probably be easier in the long run, as you don't have to worry about weird things happening (your script dying or being killed before it can put the current working directory back to where it was), and it is quite possibly more portable.