Possible Duplicate: How do I implement dispatch tables in Perl?
I have a hash table that contains commands such as int(rand()) etc.
How do I execute those commands?
You can use eval($str) to execute Perl code you store in a string variable, $str. You could alternatively store your code as function references within a hash, so something like:
$hash{'random'} = sub { int(rand()) };
This way, you could write $hash{'random'}->() to execute the function whenever you want a random value.
See also Implementing Dispatch Tables on PerlMonks.
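A small sketch of a dispatch table built this way (the command names and code snippets here are made up for illustration):

use strict;
use warnings;

# Hypothetical dispatch table: command names map to code references.
my %dispatch = (
    random   => sub { int(rand(100)) },     # random integer 0..99
    greet    => sub { "hello, $_[0]" },     # uses its argument
    time_now => sub { scalar localtime },
);

my $cmd = 'greet';                          # e.g. read from user input
if (my $code = $dispatch{$cmd}) {
    print $code->('world'), "\n";           # call through the code ref
}
else {
    warn "unknown command: $cmd\n";
}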
As others have said, you can execute them using eval. However, please note that executing arbitrary strings of possibly tainted origin via eval is a major security hole, and it can also be slow if the performance of your application matters.
You can use the Safe module to remove the security hole (not sure how bulletproof that is, but it is much better than a naked eval), but the performance cost will always be there, because Perl has to compile your code at runtime, while the main program is already executing.
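A rough sketch of the Safe approach (the code string is made up; a fresh compartment uses Safe's restricted default operator set, and you can tighten or loosen it further with permit/deny):

use strict;
use warnings;
use Safe;

# A new compartment starts with a restricted default set of operations;
# things like system(), open(), and backticks are denied.
my $compartment = Safe->new;

my $code   = 'int(rand(10)) + 1';       # e.g. a string taken from the hash
my $result = $compartment->reval($code);

die "code failed to compile or run: $@" if $@;
print "$result\n";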
Possible Duplicate: How can I inline Perl subroutines?
I know non-constant subroutines usually will not be inlined, since they can be redefined on the fly. However, I have code where inlined subroutines would actually offer a small but non-trivial optimization; but I don't want to unroll them myself since it would make the code much harder to read.
Is there some way to make Perl inline these subroutines: a way to indicate that they will not be modified at runtime, so that the interpreter can inline them at compile time?
Constant subs can be folded, but ordinary Perl subs are never inlined; practically, they can't be, since they can be redefined at any time. The CPAN modules macro and Macro attempt to provide inlinable subs, but I don't know how reliable they are. You will definitely find limitations.
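For completeness, the one case Perl does inline is a constant sub: an empty () prototype with a constant body, which is the mechanism use constant relies on. A minimal illustration:

use strict;
use warnings;
use constant DEBUG => 0;      # defines an inlinable constant sub

# Hand-rolled equivalent: an empty prototype plus a constant body
# makes the sub a candidate for inlining/constant folding.
sub PI () { 3.14159 }

# DEBUG is folded to 0 at compile time, so this whole statement
# is optimized away rather than tested on every run.
warn "debug output\n" if DEBUG;

print 2 * PI, "\n";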
Possible Duplicate: When should I use the & to call a Perl subroutine?
Every now and then I see Perl scripts where subroutines are called with a leading '&'.
Is this legacy, or does it give any benefit, since calling the subroutine without the ampersand works just as well?
sub mysub {
print "mysub\n";
}
mysub;
&mysub;
Thx/Hermann
Calling with & is generally a code smell suggesting that somebody doesn't know what they're doing and is stuck in a Perl 4 mindset. In your specific example, it works exactly the same. However, calling with & disables function prototypes, so advanced users may use it deliberately in certain circumstances. In that case you should expect to see a comment next to the call explaining why.
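A small sketch of the difference (the sub and variable names are made up):

use strict;
use warnings;

sub count ($) { return $_[0] }   # the ($) prototype imposes scalar context

my @list = ('a', 'b', 'c');

print count(@list),  "\n";   # prototype honored: @list in scalar context, prints 3
print &count(@list), "\n";   # & disables the prototype: list is flattened, prints 'a'

# A second quirk: & with no parentheses passes the caller's @_ along unchanged.
sub show      { print "got: @_\n" }
sub forwarder { &show }          # show() sees forwarder's own @_
forwarder('x', 'y');             # prints "got: x y"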
Possible Duplicate: Using Perl modules vs. using system() calls
On occasion I see people calling the system grep from Perl (and other scripting languages for that matter) instead of using the built-in language facilities/libraries to parse files. I would like to encourage people to use the built-in facilities and I want to solicit some reasons as to why it is good practice to use the built-in tools. I can think of some such as
Using libraries/language facilities is faster. Performance suffers due to the overhead of executing external commands.
Sticking to language facilities is more portable.
Any other reasons?
On the other side of the coin, are there ever reasons to favour using system commands instead of the built-in language facilities? On that note, if a Perl script is basically only calling external commands (e.g. custom utilities without libraries), might it be better just to make a shell script of it?
Actually, when it matters, a specialized tool can be faster.
The real gains of keeping the work in Perl are:
Portability (even between machines with the same OS).
Ease of error detection.
Flexibility in handling errors (see the sketch after this list).
Greater customizability/flexibility.
Fewer "moving parts". (Are you sure you correctly escaped everything and setup the environment correctly?)
Less expertise needed. (You don't need to know both Perl and the external tools (and their ports) to code and maintain the program.)
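A small sketch of the error-detection difference (the file name and pattern are made up):

use strict;
use warnings;

my $logfile = 'app.log';     # hypothetical file name

# External command: even with the list form of system() (which avoids shell
# quoting issues), the exit status still has to be decoded by hand.
my $status = system('grep', '-q', 'ERROR', $logfile);
if ($status == -1) {
    die "failed to run grep: $!";
}
elsif (($status >> 8) > 1) {          # grep exits 1 for "no match", >1 for errors
    die "grep failed with exit code ", $status >> 8, "\n";
}
my $found_external = ($status == 0);

# Built-in facilities: a single error path, reported with the reason.
open my $fh, '<', $logfile or die "Cannot open $logfile: $!";
my $found_native = grep { /ERROR/ } <$fh>;
close $fh;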
On that note, if a Perl script is basically only calling external commands (e.g. custom utilities without libraries), might it be better just to make a shell script of it?
Possibly. You can configure some shells to exit if any program returns an unsuccessful error code. This can make some scripts quite robust. For example, I have a couple of bash scripts featuring the line
trap 'e=$? ; echo "Error." ; exit $e' ERR
"On the other side of the coin, are there ever reasons to favour using system commands instead of the built-in language facilities? On that note, if a Perl script is basically only calling external commands (e.g. custom utilities without libraries), might it be better just to make a shell script of it?"
Risking the wrath of Perl hardliners here. But for me there is an easy reason to use system grep instead of perl grep: I know its syntax.
Same reason to use a Perl script instead of a bash script: I know how to do stuff in Perl and never bothered with bash script syntax.
And as we are talking scripts here, my main concern is getting things done quickly, reliably (and readably). At work I do not have to bother with portability, as all production is done on the very same system, down to the same software versions of everything for the whole product lifespan.
At home I do not have to care about lifetime or the like either, as the script is most likely single-purpose.
And in neither case do I care about performance or software security, as I would be using C++ or something else for commercial software or in time- or memory-limited scenarios.
edit: Not saying these reasons apply to everyone, or even anyone else. But while I do in fact know how to use Perl's grep, I really have no idea how to write a bash script and most likely never will. Just putting a few lines in Perl is always faster for me.
Using external tools leads to more errors.
Moreover, you have to parse the results (if any) of the external command, which is another source of errors.
Needless to say, it is also bad in terms of security.
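For illustration, a rough sketch of the parsing difference (the file name and pattern are made up):

use strict;
use warnings;

my $file = 'data.txt';    # hypothetical file name

# Backticks hand you raw lines of output that you then have to re-parse,
# and anything interpolated into the shell command must be escaped.
my @from_shell = `grep 'ERROR' $file`;
chomp @from_shell;

# Perl's own grep gives you the matching lines directly as a list,
# with no extra parsing step and no shell in between.
open my $fh, '<', $file or die "Cannot open $file: $!";
my @matches = grep { /ERROR/ } <$fh>;
close $fh;
chomp @matches;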
Possible Duplicate: When should I use semicolons in SQL Server?
Hello People more knowledgeable than me,
I'm taking some online courses for SQL and I am curious about something. Some instructors draft scripts without seeming concerned about ending simple statements with a semicolon; other instructors religiously add the semicolon at all times.
I'm just wondering: how important is the semicolon? Should it always be part of your script, or does it not matter?
I know it's a pretty simple question, but the intro classes don't really explain exactly why it's needed, and since I'm seeing it used differently... I just want to make sure I understand.
Thank you!
Terminating semi-colons will be required in some future version of SQL Server.
Although it's not currently required, it's not a bad habit to get into.
For what it's worth, I neglect semicolons all too often and my scripts almost never break, so my best guess is that they are not strictly required.
They still make the code more readable, since they add a layer of separation to your code.
One place you must use them is before a CTE whose WITH is not the first statement in the batch.
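A minimal T-SQL sketch of that case (the table, column, and variable names are made up); without the semicolon terminating the preceding statement, SQL Server rejects the WITH:

DECLARE @minimum int = 10;        -- this semicolon is required before the CTE

WITH big_orders AS (              -- without it: "Incorrect syntax near the keyword 'with'"
    SELECT order_id, total
    FROM   dbo.orders             -- hypothetical table
    WHERE  total >= @minimum
)
SELECT order_id, total
FROM   big_orders;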
How do I split a long Perl script into two or more different files that can all access the same variables - without having to rename all shared variables from e.g. $count to $::count (or $main::count which is the same)?
In other words, what's the best and simplest way to split the Perl script into several files without having to import a lot of variables/functions and/or do a lot of manual editing?
I assume it has something to do with making the code part of the same package/scope/namespace, but my experiments so far have failed.
I am not sure it makes a difference, but the script is used for web/CGI purposes and will be running under mod_perl.
EDIT - Background:
I kind of knew I would get that response. The reason I want to split up the file is the following:
Currently I have a single very old and very long Perl file. I know it is not following Perl best practices but it works.
The problem is, I need to distribute the data files it uses between different web servers, first of all for performance reasons. There will be one "master" server and one or several "slaves".
About 20% of the mentioned Perl file contains shared functions, 40% holds the code that needs to run on the master server, and 40% the code for the slave servers. Therefore, I would like to split the code into three files: 1. shared, 2. master-only, 3. slave-only. On the master server, 1 and 2 will be loaded; on the slaves, 1 and 3.
I assume this approach would use less process RAM and, more importantly, I would minimize the risk of not splitting the code correctly (e.g. a slave process calling a master data file). I don't see a great need for modularization, as the system works and the code does not need a lot of changes or exchanges with other projects.
EDIT 2 - Solution:
Found the solution I was looking for here:
http://www.perlmonks.org/?node_id=95813
In cases where the main package owns the variable, the word 'main' can be omitted, yielding something like $::var.
It is possible to get around having to fully qualify variable names when strict is in use: applying use vars to your script, with the variable names as its arguments, will get around explicit package names.
Actually, I ended up repeating the our ($count, etc...) statement for the needed variables instead of use vars ();
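A minimal sketch of what that looks like (the file and variable names are made up), with the shared file pulled in via do as suggested in the answer below:

# shared.pl -- hypothetical file holding the common variables and functions
use strict;
use warnings;
our ($count, $config);          # package main variables, visible in every file
sub increment { $count++ }
1;

# master.pl -- hypothetical master-only file
use strict;
use warnings;
do './shared.pl' or die "could not load shared.pl: ", $@ || $!;
our $count;                     # re-declare to satisfy strict in this file
increment();
print "count is $count\n";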
Do let me know if I am missing something vital - apart from not going with modules! :)
@Axeman, thanks, I will accept your answer, both for your effort and for sending me in the right direction.
Unless you put different package statements in their files, they will all be treated as if they had package main; at the top. So assuming that the scripts use package variables, you shouldn't have to do anything. If you have declared them with my (that is, if they are lexically scoped variables) then you would have to make sure that all references to the variables are in the same file.
But splitting scripts up by length is a rotten substitute for modularization. Yes, splitting helps keep file length down, but modularization is the proper way to keep code length down: for all the reasons you would want to keep code length down, modularization does it best.
If chopping the files by length could really work for you, then you could create a script like this:
do '/path/to/bin/part1.pl';
do '/path/to/bin/part2.pl';
do '/path/to/bin/part3.pl';
...
But I kind of suspect that if the organization of this code is as bad as you're, sort of, indicating, it might suffer from some of the same reinventing of the wheel that I've seen in Perl-ignorant scripts. Just offhand (I might be wrong), I think you would be surprised how much length could be trimmed simply by substituting better-tested Perl library idioms for all the hand-rolled for loops and while loops.