I am checking an existing Perl script for vulnerabilities. Most Perl-related vulnerabilities only apply to certain versions. Is there a way to determine which version of Perl the script was written for? (I cannot ask the developer.) Thank you!
There can be some indications as to the minimum Perl version that a given script is targeting. For example:
When an explicit version requirement is declared, e.g. use v5.12.
When explicit features are used, e.g. use feature 'say'. See perldoc feature for the relationship between Perl versions and available features.
When particular syntax is used. For example:
the // defined-or operator makes it easy to notice post-5.10 code.
push or pop with a scalar (array reference) argument signifies code targeting 5.14 through 5.22; this experimental autoderef syntax was removed in 5.24.
Regex syntax also tends to have occasional changes, like /d, /u, /a, /l modifiers in 5.14 or /n in 5.22.
You will have to read the perldelta documents for the list of relevant changes.
The Perl::MinimumVersion module can help with automating some of those checks.
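For example, here is a minimal sketch of such an automated check. It assumes Perl::MinimumVersion is installed from CPAN, and the file name legacy_script.pl is just a placeholder for the script under review:

    use strict;
    use warnings;
    use Perl::MinimumVersion;

    # "legacy_script.pl" is a placeholder for the script being reviewed.
    my $pmv = Perl::MinimumVersion->new('legacy_script.pl')
        or die "Could not parse the script\n";

    print "Explicit version requirement: ", $pmv->minimum_explicit_version || 'none', "\n";
    print "Version implied by syntax:    ", $pmv->minimum_syntax_version   || 'none', "\n";
    print "Overall minimum version:      ", $pmv->minimum_version, "\n";

Running something like this over each file gives a lower bound on the Perl version the code targets; it cannot tell you which interpreter the author actually ran.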
However, the presence or absence of such features has nothing to do with the vulnerabilities of the given program. Instead:
Make sure that you are running the script with an up to date Perl interpreter. Do not keep running unsupported old versions.
Except for rare edge cases, new Perl versions are extremely backwards compatible and should work without any issue. But note that upgrading Perl involves reinstalling all modules (in particular, any XS modules must be recompiled).
Make sure you are also looking at the dependencies of the script. Are they also up to date? Whether new versions are similarly compatible will depend on the specific module.
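As a hedged starting point for that dependency review, a sketch like the following prints the installed version of each module; the module names are placeholders for whatever the script actually uses:

    use strict;
    use warnings;

    # Placeholder module names; substitute the script's real dependencies.
    for my $mod (qw(DBI LWP::UserAgent JSON)) {
        (my $file = "$mod.pm") =~ s{::}{/}g;
        if (eval { require $file; 1 }) {
            no strict 'refs';
            print "$mod: version ", ${"${mod}::VERSION"} || 'unknown', "\n";
        }
        else {
            print "$mod: not installed\n";
        }
    }

You would still have to compare those versions against the changelogs or security advisories of each module by hand.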
I'm not really familiar with Perl, but I've been searching in the documentation and other sources without success for the last 2 days. In the documentation, it is written:
Perl v5.18 includes support for multiple hash functions, and changed the default (to ONE_AT_A_TIME_HARD), you can choose a different algorithm by defining a symbol at compile time. For a current list, consult the INSTALL document. Note that as of Perl v5.18 we can only recommend use of the default or SIPHASH. All the others are known to have security issues and are for research purposes only.
The thing is that I cannot find how to define this symbol, either in the INSTALL document or in any other sources or sites.
What I want to do is to change the default ONE_AT_A_TIME_HARD hash function to ONE_AT_A_TIME_OLD so I can simulate the old Perl 5.16 behavior.
This sounds like an XY problem. What are you trying to accomplish by forcibly downgrading the hash algorithm in perl to one that has known problems?
From comments:
I need to run a lot of test cases written for Perl 5.16 whose functionality depends on the old hash implementation, and it's practically impossible to change the code as there are hundreds of cases.
Whew, that's bad news. Find those developers, and hit them around the head with a copy of perldata:
Hashes are unordered collections of scalar values indexed by their associated string key.
Specifically: if this is a problem for you, it means your codebase treats hashes as ordered, when they aren't and never were. (It's just that the order was fairly consistent before 5.18 and more random afterwards.)
From perldelta:
When encountering these changes, the key to cleaning up from them is to accept that hashes are unordered collections and to act accordingly.
See: http://blog.booking.com/hardening-perls-hash-function.html
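As a rough illustration of that advice, code that relied on a stable iteration order can usually be made order-independent by imposing an explicit order, for example by sorting the keys:

    use strict;
    use warnings;

    my %colour_of = (apple => 'red', banana => 'yellow', plum => 'purple');

    # Relying on hash order was never guaranteed and breaks visibly on 5.18+:
    #   for my $fruit (keys %colour_of) { ... }

    # Order-independent version: impose an explicit order yourself.
    for my $fruit (sort keys %colour_of) {
        print "$fruit is $colour_of{$fruit}\n";
    }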
To answer your question - if you really must:
./Configure -DPERL_HASH_FUNC_ONE_AT_A_TIME_OLD -des && make && make test
But it's a very very bad idea, because as the INSTALL file in your perl source package points out:
Note that as of Perl 5.18 we can only recommend the use of default or SIPHASH. All the others are known to have security issues and are for research purposes only.
By building your perl this way you introduce a known security flaw for every perl program using it.
Note: ONE_AT_A_TIME_HARD is the new default, so selecting it won't change how Perl 5.18 works. You may mean PERL_HASH_FUNC_ONE_AT_A_TIME_OLD.
On occasion I see people calling the system grep from Perl (and other scripting languages, for that matter) instead of using the built-in language facilities/libraries to parse files. I would like to encourage people to use the built-in facilities, and I want to solicit some reasons why that is good practice. I can think of a few, such as:
Using libraries/language facilities is faster. Performance suffers due to the overhead of executing external commands.
Sticking to language facilities is more portable.
Any other reasons?
On the other side of the coin, are there ever reasons to favour using system commands instead of the built-in language facilities? On that note, if a Perl script is basically only calling external commands (e.g. custom utilities without libraries), might it be better just to make a shell script of it?
Actually, when it matters, a specialized tool can be faster.
The real gains of keeping the work in Perl are:
Portability (even between machines with the same OS).
Ease of error detection.
Flexibility in handling of errors.
Greater customizability/flexibility.
Fewer "moving parts". (Are you sure you correctly escaped everything and setup the environment correctly?)
Less expertise needed. (You don't need to know both Perl and the external tools (and their ports) to code and maintain the program.)
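To make the error-detection and "moving parts" points concrete, here is a hedged sketch contrasting in-Perl matching with shelling out to grep; the file name app.log is a placeholder:

    use strict;
    use warnings;

    my $file = 'app.log';   # placeholder file name

    # In Perl: open() reports failure explicitly and the matching stays in-process.
    open my $fh, '<', $file or die "Cannot open $file: $!";
    my @errors = grep { /ERROR/ } <$fh>;
    close $fh;
    print scalar(@errors), " error lines found\n";

    # Shelling out: you must get the quoting right and inspect $? yourself,
    # and a missing file only shows up as a non-zero exit status.
    my @external = `grep ERROR $file`;
    warn "external grep exited with status ", $? >> 8, "\n" if $? != 0;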
On that note, if a Perl script is basically only calling external commands (e.g. custom utilities without libraries), might it be better just to make a shell script of it?
Possibly. You can configure some shells to exit if any program returns an unsuccessful error code. This can make some scripts quite robust. For example, I have a couple of bash scripts featuring the line
trap 'e=$? ; echo "Error." ; exit $e' ERR
"On the other side of the coin, are there ever reasons to favour using system commands instead of the built-in language facilities? On that note, if a Perl script is basically only calling external commands (e.g. custom utilities without libraries), might it be better just to make a shell script of it?"
Risking the wrath of Perl hardliners here. But for me there is an easy reason to use system grep instead of perl grep: I know its syntax.
Same reason to use a Perl script instead of a bash script: I know how to do stuff in Perl and never bothered with bash script syntax.
And as we are talking about scripts here, my main concern is getting it done fast, reliably, and readably. At work I do not have to bother with portability, as all production runs on the very same system, down to the same software versions of everything, for the whole product lifespan.
At home I do not have to care about lifetime either, as the script is most likely single-purpose.
And in neither case do I care much about performance or software security, as I would use C++ or something else for commercial software or for time- or memory-limited scenarios.
Edit: I'm not saying these reasons apply to everyone, or even anyone else. But while I do know how to use Perl's grep, I really have no idea how to write a bash script and most likely never will. Just putting a few lines in Perl is always faster for me.
Using external tools leads to more errors.
Moreover, you have to parse the output (if any) of the external command, which is another source of errors.
Needless to say, it is also bad in terms of security.
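To illustrate the security point in a hedged way: the single-string form of system() goes through the shell, while the list form does not. The pattern and the /var/log/syslog path below are just examples:

    use strict;
    use warnings;

    # Hypothetical untrusted input (e.g. a CGI parameter).
    my $pattern = q{foo; rm -rf /tmp/junk};

    # Dangerous: the single-string form is interpreted by the shell, so the
    # text after the ";" would run as a separate command.
    # system("grep $pattern /var/log/syslog");

    # Safer: the list form bypasses the shell, so the pattern reaches grep as
    # a single literal argument ("--" also stops option parsing).
    my $status = system('grep', '--', $pattern, '/var/log/syslog');
    warn "grep returned exit status ", $status >> 8, "\n" if $status != 0;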
I tried to run AI::ExpertSystem::Advanced from a CGI script on my website.
My server (XAMPP on localhost) logs this error:
The system cannot find the path specified.
Unable to get Terminal Size. The Win32 GetConsoleScreenBufferInfo call didn't work. The COLUMNS and LINES environment variables didn't work. The resize program didn't work. at C:/Perl/lib/Term/ReadKey.pm line 362.
Compilation failed in require at C:/Perl/lib/Term/ReadLine/Perl.pm line 65.
How can I find out which path is bad? How can I find where the error is?
The error messages in the question state that the error is thrown at C:/Perl/lib/Term/ReadKey.pm line 362 and that use Term::ReadKey appears at C:/Perl/lib/Term/ReadLine/Perl.pm line 65.
However, the comments to the question indicate that you are trying to run this code in a CGI environment. Given that the purpose of Readline is to provide additional functionality when reading a line of input from a terminal[*], it makes no sense to use it in a CGI context and I'm not at all surprised that it doesn't work there.
According to metacpan, AI::ExpertSystem::Advanced does not depend on Term::ReadLine::Perl, nor do any of its dependencies. Term::ReadLine::Perl must be getting used by some other part of your code. To resolve this problem, locate that section of code (grep -ir readline /my/source/tree) and change it to either not use Readline at all or to detect whether it is running on the command line or under CGI and only require Term::ReadLine::Perl if it's on the command line.
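A minimal sketch of that conditional load, assuming a simple -t STDIN check is an acceptable way to detect an interactive terminal (the application name passed to new() is arbitrary):

    use strict;
    use warnings;

    my $term;
    if (-t STDIN) {
        # Interactive terminal: load the line editor lazily.
        require Term::ReadLine;
        $term = Term::ReadLine->new('expert-system');
    }
    else {
        # CGI or other non-interactive context: skip Readline entirely.
        $term = undef;
    }

The same guard works for any module that only makes sense when attached to a terminal.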
Edit: Tracking back through your earlier questions on this issue, I see that you're creating your ExpertSystem instance with viewer_class => 'terminal', which causes it to use AI::ExpertSystem::Advanced::Viewer::Terminal, which "Extends from AI::ExpertSystem::Advanced::Viewer::Base and its main purpose is to interact with a (console) terminal." (emphasis mine) In order to make this work, you need to use a different viewer class which does not "interact with a (console) terminal". Unfortunately, a search of metacpan finds no other available viewers, so you'll need to either find one somewhere else (the author of AI::ExpertSystem::Advanced may know where you can get one for CGI) or write your own viewer class.
[*] From the Term::ReadLine::Perl5 documentation:
GNU Readline reads lines from an interactive terminal with emacs or vi
editing capabilities. It provides as mechanism for saving history of
previous input.
This package typically used in command-line interfaces and REPLs
(Read, Eval, Print Loops).
I'm using the following command to test my perl code:
perl -MB::Lint::StrictOO -MO=Lint,all,oo -M-circular::require -M-indirect -Mwarnings::method -Mwarnings::unused -c $file
On a system with a perl version less than 5.10 I am also using uninit.
I am also using Perl::Critic and Perl::Tidy and have set up the appropriate rc files to my liking.
These modules have done a great job in helping me break some bad habits I learned when first learning perl.
Are there any more modules or pragmas that will kick me back on the straight and narrow when I mess up?
Using tests, and the Test::* family of modules and some good books have been pointed out. This new information has caused me to reconsider some assumptions about the relationship between testing and code skill building. These are all appreciated and already being researched and put to use.
It seems to me that these are two separate parts of a whole. 'perl -c', Perl::Critic and Perl::Tidy all help during the process of writing code and before execution of code. Devel::Cover, Devel::NYTProf and Tests happen during and after execution of code.
Good development dictates an iterative process, so tests will be run, and code developed over and over, but we still have this separation.
It appears to me that the focus in the answers has been on the 'during and after execution' side. Again, this is very appreciated. Can I assume that I have the 'writing and pre-execution' part down pretty well, then? At least insofar as the pragmas, modules and utilities are concerned.
I'm a little worried that you're using Perl 5.9. For two reasons.
Firstly it's a little old. 5.9.0 was released in 2003 and 5.9.5 (the last version in the 5.9.x series) was released in 2007. There have been several high quality versions of Perl since then.
Secondly (and most importantly), 5.9 is an unstable development version of Perl. 5.9 is basically the series of experiments that eventually led to Perl 5.10.0. The only reason to use it is to test that 5.10 will be a stable version of Perl. No-one should be using it at all now.
You don't appear to be testing your code, merely checking that it will compile. I suggest that you look at Test::More (which makes writing actual tests nice and easy), Test::Class (which makes dealing with very large test suites easier), and Devel::Cover (to see which bits of your code are covered by your tests and which aren't).
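If it helps, a minimal Test::More sketch might look like this; My::Math and its add() function are hypothetical stand-ins for your own code:

    use strict;
    use warnings;
    use Test::More tests => 2;

    # My::Math and add() are placeholders for the code you actually test.
    use_ok('My::Math');
    is(My::Math::add(2, 3), 5, 'add() sums two integers');

Run it with prove; Devel::Cover can then report which parts of the code the tests actually exercise.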
I usually have an environment setting for MAKE_MODE (Windows XP, using GNU make, both under Cygwin and native)
set MAKE_MODE=UNIX
I now found differences between my build server (which has no MAKE_MODE defined) and a local build. This may be something completely different, but it got me wondering what other values I could specify for MAKE_MODE.
I think MAKE_MODE=UNIX is supposed to tell GNU make to use /bin/sh (if it finds it), but I quickly checked the GNU make manual and couldn't find a description. A Google search only told me what I already know and didn't give a valid alternative.
Is the only alternative to not define the variable? Does it have influence at all when using CMD.exe and a native version of GNU make?
EDIT: So far I have found references to the values 'unix', 'win32', 'null' and undefined, but no explanations and no specifications. A look at the source code of GNU make 3.82 shows not a single occurrence of the string "MAKE_MODE", so GNU make itself apparently doesn't change its behavior based on whether this environment variable is set.
EDIT2: I checked the source code of GNU make for MinGW, and again found nothing. Maybe it's Cygwin-specific?
EDIT3: I found a reference suggesting it might be a property of an old version of GNU make, so I checked version 3.75. No luck: the string MAKE_MODE does not appear in that source code at all either. The next step really must be the Cygwin version of GNU make; I know from 10 years ago that the Cygwin port in those days was not integrated into the regular source tree.
I found an ancient mailing list entry on the Cygwin site, explaining the basic operational effect of MAKE_MODE. This definitely indicates that the variable has to do with the Cygwin implementation of GNU make.
I'll dig around in the source code, and add to this answer when I find more details.
UPDATE: In a more recent post by maintainer Christopher Faylor I found the following update for GNU make version 3.81:
Note that the --win32 command line option and "MAKE_MODE" environment
variable are no longer supported in Cygwin's make. If you need to use a
Makefile which contains MS-DOS path names, then please use a MinGW
version of make.
I haven't really found the set of values allowed for MAKE_MODE, but it is no longer necessary or supported in recent versions of GNU make for Cygwin; it was used to support DOS filenames in Cygwin's make.
And if you really want to know the set of allowed values, look in the source of Cygwin's make in a version before 3.81-1. I guess the only useful value was unix; all others would have meant the same.
Case closed? There still aren't many views here...