EDIT:
A brief overview of each module. (I'm assuming this is the right way to add more information to my post. Apologies, as this is the first time I'm posting.)
A.pm - Contains re-usable routines to read the ZIP file, decrypt the content, perform validations, etc. (used by various CGI files, command-line scripts and other Perl modules)
B.pm - The utilities file: connects to the DB, holds all the SQL-related subroutines, and invokes C.pm to write marks into each file
C.pm - Specialized routines to mark each file within the ZIP, similar to a checksum (checks which file types are allowed, reads files, writes files, examines them, etc.). Uses A.pm because the module needs to decrypt content and perform the validations done by A.pm
Including some sample code (I'm only posting a few use lines here; obviously a lot more modules are used in each .pm)
A.pm
package A;
use strict;
use warnings;
use B;
..........
B::get_database_information_for_file(..)
..........
sub validate_decrypted_mark { ...... }
sub decrypt_mark {..........}
.....
B.pm
package B;
use strict;
use warnings;
use C;
..........
C::mark_file(..)
..........
sub db_connect { ...... }
sub get_database_information_for_file {..........}
.....
C.pm
package C;
use strict;
use warnings;
use A;
..........
A::decrypt_mark(..)
..........
sub mark_file { ...... }
sub read_mark {..........}
sub write_mark {..........}
sub examine_mark {..........}
.....
A few more pieces of information (which could be useful):
These warnings started showing up when we recently moved from Solaris/Apache to LAMP.
We use mod_perl, so could it be that the module is already in memory?
=====
Hello,
I've searched on Stack Overflow and found the root cause for my issue.
Perl - Subroutine redefined
But I have a different situation than the one specified in the above thread. My issue is that I get the subroutine redefined warning in Perl (same as the one specified in the above thread), but my question is about the circular reference and/or best practices. I have the following scenario, which leads to the subroutine redefined warning:
Package A --uses-> Package B --uses-> Package C --uses-> Package A
Since Package C uses Package A, I obviously get a subroutine redefined warning. But my question is: is it bad programming practice to do it this way? What are your thoughts from a best-practices angle?
I cannot avoid these references, as Package C needs to use the subroutines defined in Package A. Grant McLean had a very good suggestion in the above thread for the situation given above. I wouldn't want to simply silence these warnings, as they could indicate real issues.
Really appreciate your time and help.
Thank you,
Circular use shouldn't generally give you a subroutine redefined warning, unless you directly execute one of the packages instead of doing use/require. Sometimes people try to do syntax checking this way:
perl -c Foo.pm
Instead, they should do
perl -e'use Foo'
So can you share exactly what you are doing to provoke the subroutine redefined warnings?
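As a minimal illustration (hypothetical Foo.pm and Bar.pm that use each other, both assumed to sit in the current directory):
# Foo.pm
package Foo;
use strict;
use warnings;
use Bar;                     # Bar.pm in turn does "use Foo;"
sub hello { return "foo" }
1;
# Bar.pm
package Bar;
use strict;
use warnings;
use Foo;
sub hello { return "bar" }
1;
Loading Foo as a module is warning-free, because %INC stops the second compilation:
perl -I. -e 'use Foo'
But running the module file directly compiles Foo.pm twice, once as the script and once more when Bar.pm does use Foo, so every sub in Foo gets defined twice and perl -I. -c Foo.pm reports a "Subroutine Foo::hello redefined" warning.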
Related
Consider the following script p.pl:
use strict;
use warnings;
use AA;
BB::bfunc();
where the file AA.pm is:
package AA;
use BB;
1;
and the file BB.pm is:
package BB;
sub bfunc {
    print "Running bfunc..\n";
}
1;
Running p.pl gives output (with no warnings or errors):
Running bfunc..
Q: Why is it possible to call BB::bfunc() from p.pl even though there is no use BB; in p.pl? Isn't this odd behavior? Or are there situations where this could be useful?
(To me, it seems like this behavior only presents an information leak to another package and violates the data-hiding principle, leading to programs that are difficult to maintain.)
You're not polluting a namespace, because the function within BB isn't being 'imported' into your existing namespace.
They are separate, and may be referenced autonomously.
If you're making a module, then usually you'll define via Exporter two lists:
@EXPORT and @EXPORT_OK.
The former is the list of things that should be imported when you use the package. The latter is the list of things that you can explicitly import via:
use MyPackage qw ( some_func );
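A minimal sketch of the two lists (MyPackage and the sub names here are made up):
package MyPackage;
use strict;
use warnings;
use Exporter 'import';

our @EXPORT    = qw( always_here );   # imported by a plain "use MyPackage;"
our @EXPORT_OK = qw( some_func );     # imported only when asked for by name

sub always_here { return 'default export' }
sub some_func   { return 'optional export' }

1;
With use MyPackage; only always_here is imported; with use MyPackage qw( some_func ); only some_func is.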
You can also define package variables in your local namespace via our and reference them via $main::.
our $fish = "haddock";
print $main::fish;
When you do this, you're explicitly referencing the main namespace. When you use a module, you cause perl to go and look for it and include it in your %INC. It then 'knows about' that namespace, because it must in order for the dependencies to resolve.
But this isn't namespace pollution, because it doesn't include anything in your namespace until you ask.
This might make a bit more sense if you have multiple packages within the same program:
use strict;
use warnings;
package CC;
our $package_var = "Blong";
sub do_something {
    print $package_var,"\n";
}
package main;
use Data::Dumper;
our $package_var = "flonk";
print Dumper $package_var;
print Dumper $CC::package_var;
Each package is its own namespace, but you can 'poke' at things in another. perl will also let you do this with objects, poking at the innards of instantiated objects, or indeed "patch" them.
That's quite powerful, but I'd generally suggest it's Really Bad Style.
While it's good practice to use or require every dependency that you are planning to access (I tried to avoid the word use there), you don't have to do that.
As long as you use full package names, that is fine. The important part is that Perl knows about the namespaces. If it does not, it will fail.
When you use something, that is equivalent to:
BEGIN {
    require Foo::Bar;
    Foo::Bar->import();
}
The require will take the Foo::Bar and convert it to a path according to the operating system's conventions. On Linux, it will try to find Foo/Bar.pm somewhere inside @INC. It will then load that file and make a note in %INC that it loaded the file.
Now Perl knows about that namespace. In the case of use, it might also import something into your own namespace, but the package will always be available from everywhere after that, as long as you use the full name. In just the same way, stuff that you have in your main script.pl is available inside packages by saying main::frobnicate(). (Please don't do that!)
It's also not uncommon to bundle several namespaces/packages in one .pm module file. There are quite a few big names on CPAN that do it, like XML::Twig.
If you do that, and don't import anything, the only way to get to the stuff under the different namespaces is by using the full name.
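A tiny sketch of that pattern (the names below are made up; XML::Twig's Twig.pm really does bundle XML::Twig::Elt and friends):
# My/Twig.pm -- one file, two namespaces
package My::Twig;
sub parse { return My::Twig::Elt->new }   # relies on the second package below

package My::Twig::Elt;                    # never imported anywhere
sub new { return bless {}, shift }

1;
After use My::Twig;, a caller reaches the second package only through its full name, e.g. My::Twig::Elt->new.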
As you can see, this is not polluting at all.
What I want to achieve:
###############CODE########
old_procedure(arg1, arg2);
#############CODE_END######
I have a huge codebase which has an old procedure in it. I want the call to that old_procedure to go to a call to a new procedure (new_procedure(arg1, arg2)) with the same arguments.
Now I know the question seems pretty stupid, but the trick is that I am not allowed to change the code or the bad_function. So the only thing I can do is create a procedure externally which reads the code flow or something, and whenever it finds the bad_function, replaces it with the new_function. They have a void return type, so I don't have to worry about return values.
I am using Perl. If someone knows how to at least start in this direction, please comment or answer. It would be nice if the new code could be done in Perl or C, but other well-known languages are good too: C++, Java.
EDIT: The code is written in shell script and Perl. I cannot edit the code, and I don't have the location of the old_function; I mean I can find it, but it's really tough. So I can use the package approach pointed out, but I'd like a way around it, so that I could parse the code that calls the function and replace the function calls. Please don't remove tags, as I need suggestions from Java and C++ experts as well.
EDIT: #mirod
So I tried it out, and your answer made a new subroutine, but now there is no way of accessing the old one. I had created a variable whose value decides which way to go (old_sub or new_sub). Is there a way to add that variable to the new code, so that control is sent back to the old_function if it is not set?
like:
use BadPackage; # sub is defined there
BEGIN
  { package BadPackage;
    no warnings; # to avoid the "Subroutine bad_sub redefined" message
    # check for the variable and send to old_sub if the var is not set
    sub bad_sub
      { # good code
      }
  }
# Thanks #mirod
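One possible way to get that behaviour (a rough, untested sketch; $main::use_new_sub is a made-up flag name) is to grab a reference to the original sub first and install the replacement through a glob assignment, so that the capture happens before the redefinition:
use BadPackage;   # the original bad_sub is defined here

BEGIN {
    no warnings 'redefine';
    my $old_sub = \&BadPackage::bad_sub;   # remember the original

    *BadPackage::bad_sub = sub {
        if ($main::use_new_sub) {
            # good code goes here
        }
        else {
            return $old_sub->(@_);         # fall back to the old behaviour
        }
    };
}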
This is easier to do in Perl than in a lot of other languages, but that doesn't mean it's easy, and I don't know if it's what you want to hear. Here's a proof-of-concept:
Let's take some broken code:
# file name: Some/Package.pm
package Some::Package;
use base 'Exporter';
our @EXPORT = qw(forty_two nineteen);
sub forty_two { 19 }   # the "broken" behaviour we are stuck with
sub nineteen { 19 }
1;
# file name: main.pl
use Some::Package;
print "forty-two plus nineteen is ", forty_two() + nineteen();
Running the program perl main.pl produces the output:
forty-two plus nineteen is 38
It is given that the files Some/Package.pm and main.pl are broken and immutable. How can we fix their behavior?
One way we can insert arbitrary code into a perl command is with the -M command-line switch. Let's make a repair module:
# file: MyRepairs.pm
CHECK {
no warnings 'redefine';
*forty_two = *Some::Package::forty_two = sub { 42 };
};
1;
Now running the program perl -MMyRepairs main.pl produces:
forty-two plus nineteen is 61
Our repair module uses a CHECK block to execute code in between the compile-time and run-time phase. We want our code to be the last code run at compile-time so it will overwrite some functions that have already been loaded. The -M command-line switch will run our code first, so the CHECK block delays execution of our repairs until all the other compile time code is run. See perlmod for more details.
This solution is fragile. It can't do much about modules loaded at run time (with require ... or eval "use ...", which is common) or about subroutines defined in other CHECK blocks (which is rare).
If we assume the shell script that runs main.pl is also immutable (i.e., we're not allowed to change perl main.pl to perl -MMyRepairs main.pl), then we move up one level and pass the -MMyRepairs in the PERL5OPT environment variable:
PERL5OPT="-I/path/to/MyRepairs -MMyRepairs" bash the_immutable_script_that_calls_main_pl.sh
These are called automated refactoring tools and are common for other languages. For Perl though you may well be in a really bad way because parsing Perl to find all the references is going to be virtually impossible.
Where is the old procedure defined?
If it is defined in a package, you can switch to the package, after it has been used, and redefine the sub:
use BadPackage; # sub is defined there
BEGIN
  { package BadPackage;
    no warnings; # to avoid the "Subroutine bad_sub redefined" message
    sub bad_sub
      { # good code
      }
  }
If the code is in the same package but in a different file (loaded through a require), you can do the same thing without having to switch package.
If all the code is in the same file, then change it:
sed -i 's/old_procedure/new_procedure/g' codefile
Is this what you mean?
With this code in Foo.pm:
use strict;
use warnings;
package Foo;
BEGIN {
    $Foo::AUTHORITY = 'cpan:ETHER';
}
1;
Loading the file as a module gives no errors:
$ perl -I. -mFoo -e1
$
And yet, loading the file directly does:
$ perl Foo.pm
Name "Foo::AUTHORITY" used only once: possible typo at Foo.pm line 6.
Moreover, perl -e'require "Foo.pm"' also does not warn.
Why is there this difference? Clearly the file is being parsed differently,
but how and why?
"Why" from a technical point of view, or from a language design point of view?
From a language point of view, it makes sense because a variable referred to within a module may well be part of the module's public API. For example, Data::Dumper exposes a bunch of package variables that alter its behaviour. (Arguably bad design, but ho hum.) These variables might only be referred to once in the module, but can potentially be referred to from other parts of the program.
If it's only referred to in the main script once, and no modules refer to it, then it's more likely to be a mistake, so we get this warning within the script, but not in the module.
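Data::Dumper's $Data::Dumper::Indent is a concrete example of this:
use strict;
use warnings;
use Data::Dumper;

# Set from the main program; no "used only once" warning is produced,
# because Data::Dumper itself also refers to $Data::Dumper::Indent.
$Data::Dumper::Indent = 0;
print Dumper( { a => 1 } );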
From a technical point of view, this warning is generated by gv.c. Personally I can't make head nor tail of the exact conditions under which it's triggered.
Surely the exception was made because some modules do
if ($Me::Setting) {
...
} else {
...
}
We didn't always have our and use vars (the latter depending on yet another exception for imported symbols).
Warnings are issued with warn (Perl side) or Perl_warner (C side). The line in question is this one.
Is there a way to load entire modules during runtime in Perl? I had thought I found a good solution with autouse but the following bit of code fails compilation:
package tryAutouse2;
use autouse 'tryAutouse';
my $obj = tryAutouse->new();
I imagine this is because autouse is specifically meant to be used with exported functions, am I correct? Since this fails compilation, is it impossible to have a packaged solution? Am I forced to require before each new module invocation if I want dynamic loading?
The reasoning behind this is that my team loads many modules, but we're afraid this is eating up memory.
You want Class::Autouse or ClassLoader.
Due to too much magic, I use ClassLoader only in my REPL for convenience. For serious code, I always load classes explicitly. Jack Maney points out in a comment that Module::Load and Module::Load::Conditional are suitable for delayed loading.
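For the original example, a Class::Autouse sketch would look something like this (untested; it assumes the CPAN module is installed and that tryAutouse.pm is findable in @INC):
package tryAutouse2;
use strict;
use warnings;
use Class::Autouse qw( tryAutouse );   # registers the class, loads nothing yet

# tryAutouse.pm is only pulled in when the first method is called on it:
my $obj = tryAutouse->new();

1;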
There's nothing wrong with require IMO. Skip the export of the function and just call the fully qualified name:
require Some::Module;
Some::Module::some_function(@some_arguments);
eval 'use tryAutouse; 1;' or die $@;
Will work. But you might want to hide the ugliness.
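For instance, you could tuck the string eval away in a small helper (a sketch only; load_and_new is a made-up name):
sub load_and_new {
    my ($class, @args) = @_;
    eval "use $class; 1" or die $@;   # load the named class at run time
    return $class->new(@args);
}

my $obj = load_and_new('tryAutouse');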
When you say:
use Foo::Bar;
You're loading module Foo::Bar in at compile time. Thus, if you want to load your module in at run time, you'd use require:
require Foo::Bar;
They are sort of equivalent, but there are differences. See the perldoc for use to understand the complete difference. For example, require used this way won't automatically import any functions. That might be important to you.
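A small sketch of that difference (Foo::Bar and some_function are placeholders):
require Foo::Bar;                        # loaded at run time, nothing imported
Foo::Bar->import(qw( some_function ));   # ask for the import yourself, or...
Foo::Bar::some_function();               # ...just use the fully qualified name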
If you want to test whether a module is there or not, wrap up your require statement in an eval and test whether or not eval is successful.
I use a similar technique to see if a particular Perl module is available:
eval { require Mail::Sendmail; };
if ($@) {
$watch->_Send_Email_Net_SMTP($watcher);
return;
}
In the above, I'll attempt to use Mail::Sendmail which is an optional module if it's available. If not, I'll run another routine that uses Net::SMTP:
sub _Send_Email_Net_SMTP {
my $self = shift;
my $watcher = shift;
require Net::SMTP; #Standard module: It should be available
WORD O'WARNING: You need to use curly braces around your eval statement, not parentheses. Otherwise, if the require doesn't work, your program will exit, which is probably not what you want.
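In other words (a small illustration of the two spellings):
# Block eval: the require runs inside the eval, so a missing module is
# caught and $@ is set.
eval { require Mail::Sendmail; };
warn "falling back to Net::SMTP\n" if $@;

# With parentheses there is no block: the require is evaluated first, as an
# ordinary argument to eval, so a missing module still kills the program.
# eval( require Mail::Sendmail );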
The 'use' instruction is performed at compile time, so checking the path to the module also takes place at compile time. This may cause incorrect behavior that is difficult to understand until you consider the contents of the @INC array.
One solution is to add a 'BEGIN' block, but the solution shown below is inelegant:
BEGIN { unshift @INC, '/path/to/module/'; }
use My::Module;
You can replace the whole mess with a simple directive:
use lib '/path/to/module';
use My::Module;
This works because it is performed at compile time, so everything is ready when the 'use' instruction is executed.
Instead of the 'BEGIN' block, you can also use a different instruction that is executed at compile time, i.e. declaring a constant:
use constant LIB_DIR => '/path/to/module';
use lib LIB_DIR;
use My::Module;
Consider that I have 100 Perl modules in 12 directories. Looking into the main Perl script, it looks like 100 lines of use p1; use p2; etc. What is the best way to solve this issue?
It seems unlikely to me that you're using all 100 modules directly in your main program. If your program uses a function in module A which then calls a function from module B, but the main program itself doesn't reference anything in module B, then the program should only use A. It should not use B unless it directly calls something from module B.
If, on the other hand, your main program really does talk directly to all 100 modules, then it's probably just plain too big. Identify different functional groupings within the program and break each of those groups out into its own module. The main reason for doing this is so that it will result in code that is more maintainable, flexible, and reusable, but it will also have the happy side-effect of reducing the number of modules that the main program talks to directly, thus cutting down on the number of use statements required in any one place.
(And, yes, I do realize that 100 was probably an exaggeration, but, if you're getting uncomfortable about the number of modules being used by your code, then that's usually a strong indication that the code in question is trying to do too much in one place and should be broken down into a collection of modules.)
Put all the use statements in one file, say Mods.pm:
package Mods;
use Mod1;
use Mod2;
...
and include the file in your main script:
use Mods;
I support eugene's solution, but you could group the use statements in files by topic, like:
package Math;
use ModMatrix;
use ModFourier;
...
And of course you should name the modules and the module collections meaningfully.
Putting all of the use statements in a separate file as eugene y suggested is probably the best approach. You can minimize the typing in that module with a bit of metaprogramming:
package Mods;
require Exporter;
our @ISA = 'Exporter';
my @packages = qw/Mod1 Mod2 Mod3 .... /;
# or map {"Mod$_"} 1 .. 100 if your modules are actually named that way
for (@packages) {
    eval "require $_" or die $@; # 'use' means "require pkg; pkg->import()"
    $_->import(); # at compile time
}
our @EXPORT = grep {*{$Mods::{$_}}{CODE}} keys %Mods::; # grab imported subs
# or @EXPORT_OK