I'm working on a very large CGI application that uses Crypt::RSA, which is properly installed. I get an "attempted to call a null reference as a function" type of error (I can't go back to get the exact message right now because we had to roll back for a release date) whenever I try to run anything in the embedded library. I traced the null reference to Crypt::RSA's constructor, which uses Class::Loader to enable Crypt::RSA::ES::OAEP.
I replaced the class loader with a "use" and a "new", and that part works fine, though the library still fails at many other points. Obviously something is wrong with my environment; I'm just not certain what. Can anyone give me any leads?
Ok, after 12 hours of digging into it, I got this working.
Here's what was going on (but not why). Whenever I called eval() on a quoted use or require statement (as happens in Class::Loader, but also in other places in the Crypt:: framework), it failed to see paths that were otherwise included in Perl's include path. Since most quoted use/require calls simply assume the class will be there, very few useful errors surfaced. I would dump @INC to a file, outside any eval block, and everything would be there.
Ironically, I used the same setup in dev and staging, and it worked in dev but not in staging. I must also point out that FindBin (I shouldn't be using it in CGI, I know, but Crypt uses it) was complaining loudly about /dev/null in staging, but not in development.
Since I can't easily compare versions or global configs, that's where my quest ends.
How I resolved the issue for myself in Crypt::RSA was to disable all the code tied to FindBin and hard-code require statements for anything my code would ever access: a require in Crypt::RSA for Crypt::RSA::ES::OAEP, and one in Crypt::Random::Generator for Crypt::Random::Provider::rand.
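For reference, this is roughly what the workaround looks like (a sketch only; the exact insertion points inside the Crypt::RSA sources are simplified here):

    # Pull the providers in directly instead of going through the
    # eval'd Class::Loader / FindBin machinery.
    require Crypt::RSA::ES::OAEP;           # added inside Crypt::RSA
    require Crypt::Random::Provider::rand;  # added inside Crypt::Random::Generator

    # The scheme object can then be constructed directly:
    my $scheme = Crypt::RSA::ES::OAEP->new();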
Hope this helps anyone in the future who has the problem. Anyone who can suggest the why of it, please respond and I'll add it to complete the post.
I'm getting the following run time output:
"Class _NSZombie_GraphicPath is implemented in both ?? and ??. One of the two will be used. Which one is undefined."
I have no clue how to fix this. A couple of other questions cover this, but in those cases unit testing seemed to be involved. Has anyone come across this problem before, and if so, how did you fix it?
It implies that two images and/or static libraries export the class GraphicPath. For example, one may be your app and the other a unit test bundle; a library you link against could also export that class. In any event, you should review your project's compilation phases, including all dependencies, and ensure that GraphicPath.m is compiled exactly once, removing any duplicates. Also note that it is possible to compile the same file twice for a single target. I expect that you would also see a log warning when running with zombies disabled. You can also use nm to dump an image's symbol names.
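As a hypothetical check (the binary paths are placeholders), nm plus grep will show which images define the class:

    nm /path/to/MyApp.app/MyApp | grep GraphicPath
    nm /path/to/libSomeDependency.a | grep GraphicPath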
How can I perform a "shallow" syntax check on perl files. The standard perl -c is useful but it checks the syntax of imports. This is sometimes nice but not great when you work in a code repository and push to a running environment and you have a function defined in the repository but not yet pushed to the running environment. It fails checking a function because the imports reference system paths (ie. use Custom::Project::Lib qw(foo bar baz)).
It can't practically be done, because imports have the ability to influence the parsing of the code that follows. For example use strict makes it so that barewords aren't parsed as strings (and changes the rules for how variable names can be used), use constant causes constant subs to be defined, and use Try::Tiny changes the parse of expressions involving try, catch, or finally (by giving them & prototypes). More generally, any module that exports anything into the caller's namespace can influence parsing because the perl parser resolves ambiguity in different ways when a name refers to an existing subroutine than when it doesn't.
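A minimal illustration with use constant (as mentioned above): remove the import and perl -c rejects the file under strict, so the import genuinely changes how the code is parsed.

    use strict;
    use warnings;
    use constant DEBUG => 0;

    # DEBUG parses as a call to the constant sub created by the import;
    # without the "use constant" line, strict rejects the bareword and
    # compilation fails.
    print "debugging enabled\n" if DEBUG;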
There are two problems with skipping the import checks. The first: how do you keep -c from failing when the required modules are missing?
There are two solutions:
A. Add a fake/stub module in production
B. In all your modules, use a special catch-all @INC subroutine entry (using subs in @INC is explained here). This obviously has the problem that the module will NOT fail at real production runtime if the libraries are missing - DoublePlusNotGood in my book.
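A minimal sketch of such a catch-all @INC hook (purely illustrative, and with exactly the caveat just mentioned):

    # When a module can't be found anywhere else, hand the compiler an
    # empty, always-true stub so perl -c can keep going.  Do not leave
    # this enabled at real runtime.
    BEGIN {
        push @INC, sub {
            my ($self, $filename) = @_;
            my $stub = "1;\n";
            open my $fh, '<', \$stub or return;
            return $fh;
        };
    }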
The second problem: even if you could somehow skip failing on missing modules, you would STILL fail on any use of the identifiers imported from the missing module or used explicitly from that module's namespace.
The only realistic solution to this is to go back to option A and use a fake stub module, but this time one that has a declared and (as needed) exported identifier for every public interface - e.g. do-nothing subs or dummy variables (see the sketch below).
However, even that will fail for some advanced modules that dynamically determine what to create in their own namespace and what to export at runtime (and the caller code could dynamically determine which subs to call - heck, sometimes which modules to import).
But this approach would work just fine for normal "Java/C-like" OO or procedural code that only calls statically named predefined public subs, methods and accesses exported variables.
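As a concrete picture, a stub for the Custom::Project::Lib module from the question might look roughly like this - do-nothing subs only, just enough to satisfy perl -c:

    package Custom::Project::Lib;
    use strict;
    use warnings;
    use Exporter 'import';

    # Every public identifier the real module exports, stubbed out.
    our @EXPORT_OK = qw(foo bar baz);

    sub foo { }
    sub bar { }
    sub baz { }

    1;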
I would suggest that it's better to include your code repository in your syntax check. perl -I/path/to/working/code/repo/local_perl/ -c or set PERL5LIB=/path/to/working/code/repo/local_perl/ prior to running perl -c. Either option should allow you to check against your working code, assuming you have it in a directory structure similar to your live code.
I guess you could make stubs for the missing libraries in your home folder.
Have you looked into PPI? I think it does follow imports, however it could perhaps be more easily modified to guess what looks like a function name.
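A rough sketch of that idea, assuming PPI is installed (the file name is a placeholder): PPI parses the source statically, so nothing gets executed and no imports need to be resolvable.

    use strict;
    use warnings;
    use PPI;

    my $doc = PPI::Document->new('lib/Some/File.pm')
        or die "could not parse file";

    # List every named sub declaration found in the file.
    my $subs = $doc->find('PPI::Statement::Sub') || [];
    print $_->name, "\n" for grep { $_->name } @$subs;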
I'm getting ready to try to deploy some code to multiple machines. As far as I know, using a Makefile.PL to track dependencies is the best way to ensure they are installed everywhere. The problem I have is that I'm not sure our Makefile.PL has been kept up to date as this application has passed through a few different developers.
Is there any way to automatically parse through either my source or a few full runs of my program to determine exactly what versions of what modules my application is depending on? On top of that, is there any way to filter it based on CPAN packages? (So that I only depend on Moose instead of every single module that comes with Moose.)
A third related question is, if you depend on a version of a module that is not the latest, what is the best way to have someone else install it? Should I start including entire localized Perl installations with my application?
Just to be clear - you cannot generically get a list of modules that the app depends on by code analysis alone. E.g. if your app does eval { require $module; $module->import() }, where $module is passed via the command line, then this can ONLY be detected by actually running that specific command line with ALL the possible module values.
If you do wish to do this, you can figure out every module used by a combination of runs via:
Devel::Cover. Coverage reports would list 100% of the modules used, but you don't get version numbers.
Print %INC at every possible exit point in the code, as slu's answer said (see the sketch after this list). This should probably be done in an END {} block as well as a __DIE__ handler to cover all exit points, and even then it may not be fully 100% covering in the generic case if your __DIE__ handler gets overwritten somewhere within the program.
Devel::Modlist (also mentioned by slu's answer) - the downside compared to Devel::Cover is that it does NOT seem to be able to aggregate a database across multiple sample runs like Devel::Cover does. On the plus side, it's purpose-built, so has a lot of very useful options (CPAN paths, versions).
Please note that the other module (Module::ScanDeps) does NOT seem to allow you to do runtime analysis based on arbitrary command line arguments (at first glance it seems to only allow you to execute the program with no arguments), and if that's true, it is inferior to the three methods above for any code that may load modules dynamically.
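A hedged sketch of option 2 from the list above (the log path is a placeholder): dump %INC, which maps each loaded module file to the path it was loaded from, in an END block so it is written on the way out.

    END {
        if ( open my $fh, '>>', '/tmp/module-usage.log' ) {
            printf {$fh} "%s => %s\n", $_, $INC{$_} for sort keys %INC;
        }
    }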
Module::ScanDeps - Recursively scan Perl code for dependencies
Does both static and runtime scanning. Just modules, though - I don't know of any exact way of verifying which versions come from which distributions. You could get old packages from BackPAN, or just package your entire chain of local dependencies up with PAR.
You could look at %INC, see http://www.perlmonks.org/?node_id=681911 which also mentions Devel::Modlist
I would definitely use Devel::TraceUse, which also shows a tree of the modules, so it's easy to guess where they are being loaded.
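Its usual invocation, for reference:

    perl -d:TraceUse your_program.pl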
I've been assigned to pick up a web application written in some old legacy Perl code, get it working on our server, and later extend it. The code was written 10 years ago by a solitary self-taught developer...
The code has weird stuff going on - the author was not afraid to do 'lib-param.pl' on line one, and later in the file do '/lib-pl/lib-param.pl' - which is of course a different file.
Including a.pl with methods b() and c() and later including d.pl with methods c() and e() seems to be quite popular too... Packages appear to be unknown, so you'll just find &c() somewhere in the code later.
Interesting questions:
Is there a tool that can draw relations between perl-files? Show a list of files used by each other file?
The same for MySQL databases and tables? Can it show which schemas/tables are used by which files?
Is there an IDE that knows which c() is called - the one in a.pl or the one in d.pl?
How would you start to try to understand the code?
I'm inclined to go through each file and refactor it, but am not allowed to do that - only the strict minimum to get the code working. (But since the code never uses strict, I don't know if I'm gonna...)
Not using strict is a mistake -- don't continue it. Move the stuff in d.pl to D.pm (or perhaps a better name altogether), and if the code is procedural, use Sub::Exporter to get those subs back into the calling package. strict is lexical, so you can turn it on for just your new package D without touching anything else. To find out which code is being called, use Devel::SimpleTrace.
perl -MDevel::SimpleTrace ./foo.pl
Now any warnings will be accompanied by a full backtrace -- sprinkle warn statements around the code and run it.
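A hedged sketch of the suggested move (the file and sub names come from the question; the bodies are elided): d.pl becomes D.pm with strict turned on and its subs exported via Sub::Exporter, so existing callers only need a use line.

    package D;
    use strict;
    use warnings;
    use Sub::Exporter -setup => { exports => [ qw(c e) ] };

    sub c { ... }    # body carried over from d.pl
    sub e { ... }    # likewise

    1;

Callers then say use D qw(c e); and keep calling c() as before.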
I think the MySQL question should be removed from this. Schema/table mappings have nothing to do with Perl; it seems an out-of-place distraction on this question.
I would write a utility to scan a complete list of all subs and which file they live in; then I would write a utility to give me a list of all function calls and which file they come from.
By the way - it is not terribly hard to write a fairly mindless static analysis tool to generate a call graph.
For many cases, in well-written code, that will be enough to help me out...
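A deliberately mindless sketch of such a tool, under the assumption that calls look like the &c() style mentioned in the question; it is regex-based, so it will miss dynamic calls and produce some noise, which is fine for a first orientation (run it as, say, perl callmap.pl *.pl):

    use strict;
    use warnings;

    my %defined;    # sub name => file it is defined in
    my %called;     # sub name => { file => 1, ... }

    for my $file (@ARGV) {
        open my $fh, '<', $file or next;
        while ( my $line = <$fh> ) {
            $defined{$1} = $file   if $line =~ /^\s*sub\s+(\w+)/;
            $called{$1}{$file} = 1 while $line =~ /&(\w+)\s*\(/g;
        }
    }

    for my $sub ( sort keys %called ) {
        my $home = $defined{$sub} // 'UNKNOWN';
        print "$sub (defined in $home) called from: ",
              join( ', ', sort keys %{ $called{$sub} } ), "\n";
    }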
So I figured out how to use the -mthumb and -mno-thumb compiler flags and more or less understand what they do.
But what is the -mthumb-interlinking flag doing? When is it needed, and is it set for the whole project if I set 'compile for thumb' in my project settings?
Thanks for the info!
Open a terminal and type man gcc
Do you mean -mthumb-interwork?
-mthumb-interwork
Generate code which supports calling between the ARM and Thumb
instruction sets. Without this option the two instruction sets
cannot be reliably used inside one program. The default is
-mno-thumb-interwork, since slightly larger code is generated when
-mthumb-interwork is specified.
If this is related to a build configuration, you should be able to set it separately for each configuration "such as Release or Debug".
Why do you want to change these settings? I know using thumb instructions save some memory but will it save enough to matter in this case?
My application uses both Thumb and VFP code, but I never specifically set the -mthumb-interwork flag. How is that possible? According to the man page, without that flag the two instruction sets cannot be reliably used inside one program.
It says "reliably"; so without that option, it seems they still can be mixed within a single program but it might be "unreliably". I think normally mixing both instructions sets works, the compiler is smart enough to figure out when it has to switch from one set to another one. However, there might be border cases the compiler just doesn't understand correctly and it might fail to see that it should switch instruction sets here, causing the application to fail (most likely it will crash). This option generates special code, so that no matter what your code does, the switching always happens correctly and reliably; the downside is that this extra code is needed for every global visible function and thus increases the binary side (I have no idea if it also might slow down function calls a little bit, I personally would expect that).
Please also note the following two settings:
-mcallee-super-interworking
    Gives all externally visible functions in the file being compiled an ARM instruction set header which switches to Thumb mode before executing the rest of the function. This allows these functions to be called from non-interworking code.
-mcaller-super-interworking
    Allows calls via function pointers (including virtual functions) to execute correctly regardless of whether the target code has been compiled for interworking or not. There is a small overhead in the cost of executing a function pointer if this option is enabled.
Though I think you only need those when building libraries to be used with other projects; I don't know for sure. The GCC Thumb handling is definitely "underdocumented".