Is there a runtime flow chart for Perl?

I am trying to better understand the logic and flow of exceptions. In doing so, I realized that I really lack an understanding of how Perl interprets and runs programs: which phases are involved, and what happens in each phase.
For example, I'd like to understand when the STD* I/O handles are bound and when they are released, what happens with the $SIG{*} entries, how they relate to exceptions, how a program dies, etc. I'd like to have better insight into the internal mechanics.
I am looking for links or books. I prefer material that also includes visual charts, but this is not mandatory. I'd like to see the "big picture" of the whole process; from there I can dig further if I find it necessary.
I found that Chapter 18 of Programming Perl gives an overview of the compile phase, and I am trying to work through it, but I'd appreciate other good sources too.

Some alternative sources (there are not very many):
Manning's Extending and Embedding Perl, which is the go-to reference on Perl's internals outside of the source
The chapter on the Perl internals in Advanced Perl Programming, which may be exactly what you want
Simon Cozens's Perl internals FAQ
Those may be more focused on what you're looking for. I'm not sure any of them explicitly spells out the interpreter's runtime execution order, though. The first one is a better "I want to work with this stuff" book; the other two are probably good introductory references.
Some of the questions you ask are not, as far as I know, explicitly documented - the I/O question being one I can't think of a good source for in particular. Exception handling is documented very well in Try::Tiny's documentation, and it's what we use for exceptions. Signal handling is messy, but perlipc documents it pretty well. With threads, you may be stuck with unsafe signals - I generally avoid threads in favor of multiple processes unless I must have shared memory.
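For instance, Try::Tiny wraps the fragile eval/$@ idiom behind try/catch/finally blocks. A minimal sketch (the die message is just illustrative):

use strict;
use warnings;
use Try::Tiny;

try {
    die "something went wrong\n";   # any die inside try becomes an "exception"
} catch {
    warn "caught: $_";              # the error lands in $_ (and $_[0]) here
} finally {
    # runs whether or not the try block died
};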

You might start with these topics accessible via the perldoc program:
Internals and C Language Interface
perlembed Perl ways to embed perl in your C or C++ application
perldebguts Perl debugging guts and tips
perlxstut Perl XS tutorial
perlxs Perl XS application programming interface
perlxstypemap Perl XS C/Perl type conversion tools
perlclib Internal replacements for standard C library functions
perlguts Perl internal functions for those doing extensions
perlcall Perl calling conventions from C
perlmroapi Perl method resolution plugin interface
perlreapi Perl regular expression plugin interface
perlreguts Perl regular expression engine internals
perlapi Perl API listing (autogenerated)
perlintern Perl internal functions (autogenerated)
perliol C API for Perl's implementation of IO in Layers
perlapio Perl internal IO abstraction interface
perlhack Perl hackers guide
perlsource Guide to the Perl source tree
perlinterp Overview of the Perl interpreter source and how it works
perlhacktut Walk through the creation of a simple C code patch
perlhacktips Tips for Perl core C code hacking
perlpolicy Perl development policies
perlgit Using git with the Perl repository
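Of these, perlinterp is probably the closest to the "big picture" of the run phases you're after. Each topic can be read from a shell, for example:

perldoc perlinterp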

Related

Perl does not have a simple #include, OK, but WHY?

Perl does not have a C-style, preprocessor-level "include" function. That is how it is, and there are numerous sites that explain how to more or less emulate the same sort of behavior.
The one thing I couldn't find on any of these sites is any explanation for WHY Perl does not have this functionality. Given that Perl often provides many different ways to accomplish the same thing, it is a curious omission.
Can somebody please explain why the decision was made to exclude this sort of functionality?
Perl already has require, do, eval, and here-documents, among other things. It doesn't need a built-in preprocessor; if you need one that badly, there are source filters: http://perldoc.perl.org/perlfilter.html
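To illustrate, the closest analogues to textually including another file are do and require. A rough sketch (the file names are hypothetical):

# "do" reads and executes the file every time it is called;
# the file's last expression is its return value.
my $config = do './config.pl';
die "couldn't run config.pl: ", $@ || $!, "\n" unless defined $config;

# "require" loads a file at most once (tracked in %INC)
# and insists that the file end with a true value.
require './lib/helpers.pl';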
In general, nobody wants #include, even C and C++ programmers would mostly be happy to give it up in exchange for:
Faster compiles
Clean module system
#include is legacy, period. If a mainstream language designer announced tomorrow that they were adding #include to (your favorite language here) you'd probably see mass hysteria, laughing, and loss of confidence in that designer.
Language designers don't implement #include in any new language; there are simply better ways to do it. In general, the trend is toward single-pass lexing. Preprocessing requires you to incrementally expand #includes and potentially revisit the same characters repeatedly. It has been fraught with problems, and it is one of the reasons that C++ is such a dog to compile. It was OK in the 60s and 70s, when memory and CPU were tiny and languages, problems, and codebases were simpler. Nowadays, you want to be able to compile a "library" once and store its type metadata with it, so the compiler can access it efficiently without rescanning it. That is what Microsoft does, anyway, with precompiled headers.
So what would #include be good for?
Modules? No. See above. Modules are compiled once and export their metadata efficiently; they don't pollute the namespace of their clients, they don't recursively inject other includes, and they can be distributed in binary form, among umpteen other advantages that I'm not even smart enough to think of.
Including macros? No. Replace them with constants, inlining, and generic programming, all of which can be precompiled and exported from a module.
Splicing in generated code? There are better ways to do it anyway. See modules.
The only useful functionality for the preprocessor, IMO, is conditional compilation.
#ifdef _WIN32_
// do windowsy stuff
#else
#endif
Again, Perl can do this with do, eval or require as well.
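For instance, a minimal Perl sketch of conditional loading, switching on the built-in $^O variable (the module names here are hypothetical):

# Load platform-specific code only on the platform that needs it.
if ($^O eq 'MSWin32') {
    require My::Win32Support;    # hypothetical Windows-only module
    My::Win32Support->import;
} else {
    require My::PosixSupport;    # hypothetical POSIX module
    My::PosixSupport->import;
}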
Perl doesn't have it, or lack it, any more than C does.
The C preprocessor was designed such that it and C need to know as little as possible about each other. There is no reason why you can't use it with Perl.
So why don't Perl programmers do it?
As codenhein explains, it's generally a bad idea to use an include mechanism and a compiler that don't know anything about each other, as it leaves you open to some crazy errors that neither one can diagnose; the fact that C programmers are used to it doesn't change that.

Using system commands in Perl instead of built in libraries/functions [duplicate]

This question already has an answer here: Using Perl modules vs. using system() calls (1 answer). Closed 9 years ago.
On occasion I see people calling the system grep from Perl (and other scripting languages, for that matter) instead of using the built-in language facilities/libraries to parse files. I would like to encourage people to use the built-in facilities, and I want to solicit some reasons why that is good practice. I can think of some, such as:
Using libraries/language facilities is faster. Performance suffers due to the overhead of executing external commands.
Sticking to language facilities is more portable.
Any other reasons?
On the other side of the coin, are there ever reasons to favour using system commands instead of the built-in language facilities? On that note, if a Perl script is basically only calling external commands (e.g. custom utilities without libraries), might it be better just to make a shell script of it?
Actually, when it matters, a specialized tool can be faster.
The real gains of keeping the work in Perl are:
Portability (even between machines with the same OS).
Ease of error detection (see the sketch after this list).
Flexibility in handling of errors.
Greater customizability/flexibility.
Fewer "moving parts". (Are you sure you correctly escaped everything and setup the environment correctly?)
Less expertise needed. (You don't need to know both Perl and the external tools (and their ports) to code and maintain the program.)
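As an illustration of the error-detection point, compare the reporting you get from a Perl built-in with what comes back from shelling out. A minimal sketch (the file name and pattern are just examples):

# In-language: the reason for a failure lands in $! automatically.
open my $fh, '<', 'data.txt'
    or die "can't open data.txt: $!";

# External command: you get raw text plus an exit status to decode
# yourself (and grep also exits 1 when it merely finds no match).
my @hits = `grep pattern data.txt`;
die "grep failed (exit ", $? >> 8, ")\n" if $? >> 8 > 1;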
On that note, if a Perl script is basically only calling external commands (e.g. custom utilities without libraries), might it be better just to make a shell script of it?
Possibly. You can configure some shells to exit if any program returns an unsuccessful error code. This can make some scripts quite robust. For example, I have a couple of bash scripts featuring the line
trap 'e=$? ; echo "Error." ; exit $e' ERR
"On the other side of the coin, are there ever reasons to favour using system commands instead of the built-in language facilities? On that note, if a Perl script is basically only calling external commands (e.g. custom utilities without libraries), might it be better just to make a shell script of it?"
Risking the wrath of Perl hardliners here, but for me there is an easy reason to use the system grep instead of Perl's grep: I know its syntax.
Same reason to use a Perl script instead of a bash script: I know how to do stuff in Perl and never bothered with bash script syntax.
And as we are talking about scripts here, my main concern is getting it done fast and reliably (and readably). At work I do not have to bother with portability, as all production is done on the very same system, down to the same software versions of everything, for the whole product lifespan.
At home I do not have to care about lifetime or whatever either, as the script is most likely single-purpose.
And in neither case do I care about performance or software security, as I would be using C++ or something else for commercial software or in time- or memory-limited scenarios.
Edit: I'm not saying these reasons apply to anyone, or even anyone else. But while in reality I know how to use Perl's grep, I really have no idea how to write a bash script and most likely never will. Just putting a few lines in Perl is always faster for me.
Using external tools leads to more errors.
Moreover, you have to parse the results (if any) of the external command, which is another source of error.
Needless to say, it is also bad in terms of security.

How can I test Perl code for DRY (Don't Repeat Yourself)?

For Python, we could use something like the Python Code Clone Detector, but I just could not find anything similar for Perl.
With reference to DRY, Catalyst mentions that it's built on the DRY principle, and if it is, I would imagine some tool might have been used to verify that claim.
Furthermore, does Perl promote DRY or not? I know for sure it promotes "repeat others" by using CPAN.
You probably mean "Perl promotes 'do not repeat others' by providing CPAN", and that is certainly true.
However, DRY is more of a general programming principle (write many specialized, small functions that can be parametrized properly by their arguments instead of writing monolithic functions that "do it all") than a language feature. You can write DRY-compliant code in C++, Python, Perl, Ruby, C and most others. Some languages require more boilerplate, some less.
Perl definitely allows for small functions with little boilerplate by providing concise language constructs; a sketch follows below.
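A tiny illustration of that style (the function and data here are made up):

# One small, parametrized function instead of pasting the same
# cleanup logic for every field.
sub trimmed {
    my ($value) = @_;
    $value =~ s/^\s+|\s+$//g;    # strip leading/trailing whitespace
    return $value;
}

my @clean = map { trimmed($_) } ('  Alice  ', '  Boston  ');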
I don't know of tools detecting non-DRY code for Perl, though.

Why is Perl used so extensively in biology research? [closed]

Closed 7 years ago. This question is opinion-based and is not currently accepting answers.
I work as support staff in a biology research institute as a student, and Perl seems to be used everywhere. Not for every single project, but it seems that more than half the people here have a few Perl books in their office or on their desk.
Why is Perl used so much in biology?
Lincoln Stein highlighted some of the saving graces of Perl for bioinformatics in his article:
How Perl Saved the Human Genome Project.
From his analysis:
I think several factors are responsible:
Perl is remarkably good for slicing, dicing, twisting, wringing, smoothing, summarizing and otherwise mangling text. Although the biological sciences do involve a good deal of numeric analysis now, most of the primary data is still text: clone names, annotations, comments, bibliographic references. Even DNA sequences are textlike. Interconverting incompatible data formats is a matter of text mangling combined with some creative guesswork. Perl's powerful regular expression matching and string manipulation operators simplify this job in a way that isn't equalled by any other modern language.
Perl is forgiving. Biological data is often incomplete, fields can be missing, or a field that is expected to be present once occurs several times (because, for example, an experiment was run in duplicate), or the data was entered by hand and doesn't quite fit the expected format. Perl doesn't particularly mind if a value is empty or contains odd characters. Regular expressions can be written to pick up and correct a variety of common errors in data entry. Of course this flexibility can also be a curse. I talk more about the problems with Perl below.
Perl is component-oriented. Perl encourages people to write their software in small modules, either using Perl library modules or with the classic Unix tool-oriented approach. External programs can easily be incorporated into a Perl script using a pipe, system call or socket. The dynamic loader introduced with Perl5 allows people to extend the Perl language with C routines or to make entire compiled libraries available for the Perl interpreter. An effort is currently under way to gather all the world's collected wisdom about biological data into a set of modules called "bioPerl" (discussed at length in an article to be published later in the Perl Journal).
Perl is easy to write and fast to develop in. The interpreter doesn't require you to declare all your function prototypes and data types in advance, new variables spring into existence as needed, calls to undefined functions only cause an error when the function is needed. The debugger works well with Emacs and allows a comfortable interactive style of development.
Perl is a good prototyping language. Because Perl is quick and dirty, it often makes sense to prototype new algorithms in Perl before moving them to a fast compiled language.
Sometimes it turns out that Perl is fast enough that the algorithm doesn't have to be ported; more frequently, one can write a small core of the algorithm in C, compile it as a dynamically loaded module or external executable, and leave the rest of the application in Perl (for an example of a complex genome mapping application implemented in this way, see http://waldo.wi.mit.edu/ftp/distribution/software/rhmapper/).
Perl is a good language for Web CGI scripting, and is growing in importance as more labs turn to the Web for publishing their data.
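To make the text-mangling point concrete, here is a rough sketch of the kind of format juggling Stein describes, reading FASTA-style records (the format handling is deliberately simplified):

# Parse ">id description" header lines and accumulate sequence lines.
my (%seq, $id);
while (my $line = <STDIN>) {
    chomp $line;
    if ($line =~ /^>(\S+)/) {    # header line: capture the record ID
        $id = $1;
    } elsif (defined $id) {
        $seq{$id} .= $line;      # append this line to the current record
    }
}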
The real answer probably has less to do with Perl than you think. Many of the things that happen are accidents of history. At the time, way back when, Perl was pretty popular, Java was getting more popular, not too many people were paying attention to Python, and Ruby was just getting started.
The people who needed to get work done used Perl and made some libraries in Perl, and other people started using those libraries. Once people start using something that is moderately useful to them, they tend not to switch (economists call those "switching costs"). From there, even more people start using it because a lot of other people are using it.
The same evolution might not happen today. I'd say that Perl, Python, and Ruby are all completely adequate and up to the task. All the things that mobrule quotes from Lincoln Stein could apply to any of the three today. If everyone had to start from scratch today, any one of those languages could be the one that everyone uses.
I've noticed, from my own client base though (a very small and unrepresentative sample of biotech), that the people pushing the programming for a lot of the biological stuff seemed to be at least part-time sysadmins who were supporting scientists. The scientists worried about the science and did some light programming, but the IT support people were doing a lot of the heavy lifting for the non-science parts. Perl is very well positioned as a sysadmin tool since it's the duct-tape of the internet.
Probably because Perl is good at manipulating strings, and much research in genetics involves the manipulation of veeery long "ACTGCATG..." strings. Just guessing...
I use lots of Perl for dealing with qualitative and quantitative data in social science research. In terms of getting things done (largely with text) quickly, finding libraries on CPAN (nice central location), and generally just getting things done quickly, it can't be surpassed.
Perl is also excellent glue, so if you have some instrumental records, and you need to glue them to data analysis routines, then Perl is your language.
Perl seems to be the language of choice for bioinformatics - there's even an O'Reilly title on just this subject: Beginning Perl for Bioinformatics.
Perl is very powerful when it comes to dealing with text, and it's present in almost every Linux/Unix distribution. In bioinformatics, not only are sequence data very easy to manipulate with Perl, but most bioinformatics algorithms also output some kind of text results.
Then, the biggest bioinformatics centers, like the EBI, had that great guy, Ewan Birney, who was leading the BioPerl project. That library has parsers for the results of every kind of popular bioinformatics algorithm, and tools for manipulating the different sequence formats used in the major sequence databases.
Nowadays, however, Perl is not the only language used by bioinformaticians: along with sequence data, labs produce more and more different kinds of data types and other languages are more often used in those areas.
The R statistics programming language for example, is widely used for statistical analysis of microarray and qPCR data (among others). Again, why are we using it so much? Because it has great libraries for that kind of data (see bioconductor project).
Now when it comes to web development, CGI is not really state of the art today, but people who know Perl may stick to it. In my company though it is no longer used...
I hope this helps.
Perl basically forces very short development cycles. That's the kind of development that gets stuff done.
It's enough to outweigh Perl's disadvantages.
Bioinformatics deals primarily in text parsing, and Perl is the best programming language for the job, as it is made for string parsing. As the O'Reilly book (Beginning Perl for Bioinformatics) says, "With [Perl's] highly developed capacity to detect patterns in data, Perl has become one of the most popular languages for biological data analysis."
This seems to be a pretty comprehensive response. Perhaps one thing missing, however, is that most biologists (until recently, perhaps) didn't have much programming experience at all. The learning curve for Perl is much lower than for compiled languages (like C or Java), and yet Perl still provides a ton of features when it comes to text processing. So what if it takes longer to run? Biologists can definitely handle that. Lab experiments routinely take an hour or more to finish, so waiting a few extra minutes for that data processing isn't going to kill them!
Just note that I am talking here about biologists who program out of necessity. I understand that there are some very skilled programmers and computer scientists out there who use Perl as well, and these comments may not apply to them.
People have missed out DBI, the Perl abstract database interface, which makes it really easy to work with bioinformatics databases.
There is also the one-liner angle. You can write something to reformat data in a single line of Perl and just use the -pe flag to run it at the command line (an example follows below). Many people using AWK and sed moved to Perl. Even in full programs, file I/O is incredibly easy and quick to write, and text transformation is expressive at a high level compared to any engineering language around. People who use Java or even Python for one-off text transformations are just too lazy to learn another language. Java's I/O performance, especially, depends heavily on the JVM implementation.
At least you know how fast or slow Perl will be everywhere: slightly slower than C I/O. Don't learn grep, cut, sed, or AWK; just learn Perl as your command-line tool, even if you don't produce large programs with it. Regarding CGI, Perl has plenty of better web frameworks, such as Catalyst and Mojolicious, but the mindshare definitely came from CGI, bioinformatics being one of the earliest heavy users of the Internet.
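For example, a typical -pe one-liner of the sort described above (the substitution itself is just a placeholder):

# -p loops over input lines and prints each one; the s/// edits it in flight.
perl -pe 's/\bchr(\d+)\b/chromosome $1/g' input.txt > output.txt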
Perl is very easy to learn compared to other languages. It can handle biological data, which is becoming big data; it works well for data manipulation, data curation, and all kinds of DNA programming. Automation in biology has become easy thanks to languages like Perl, Python, and Ruby. It is very approachable for those who know biology but do not know how to program in other languages.
Personally, and I know this will date me, but it's because I learned Perl first. I was being asked to take FASTA files and mix with other FASTA files. Perl was the recommended tool when I asked around.
At the time I'd been through a few computer science classes, but I didn't really know programming all that well.
Perl proved fairly easy to learn. Once I'd gotten regular expressions into my head I was parsing and making new FASTA files within a day.
As has been suggested, I was not a programmer. I was a biochemistry graduate working in a lab, and I'd made the mistake of setting up a Linux server where everyone could see me. This was back in the day when that was an all-day project.
Anyway, Perl became my go-to for anything I needed to do around the lab. It was awesome, easy to use, super flexible; the other Perl guys in other labs were a lot like me.
So, to cut it short, Perl is easy to learn, flexible and forgiving, and it did what I needed.
Once I really got into bioinformatics I picked up R, Python, and even Java. Perl is not that great at helping to create maintainable code, mostly because it is so flexible. Now I just use the language for the job, but Perl is still one of my favorite languages, like a first kiss or something.
To reiterate, most bioinformatics folks learned coding by just kluging stuff together, and most of the time you're just trying to get an answer for the principal investigator (PI), so you can't spend days on code design. Perl is superb at just getting an answer; the code probably won't work a second time, and you will not understand anything in it if you see it six months later. BUT if you need something now, then it is a good choice, even though I mostly use Python now.
I hope that gives you an answer from someone who lived it.

How can I use Perl to test C programs?

I'm looking for some tutorials showing how I could test C programs by writing Perl programs to automate testing.
Basically I want to learn automation testing with Perl programs. Can anyone kindly share such tutorials or any experiences of yours which can help me kick-start this process?
Perl tests usually use TAP, the Test Anything Protocol. There are several C libraries for TAP. Watch this Perl testing presentation.
If you want to start learning how to use Perl to test external programs, start with learning to use Perl to test Perl bits. The Test::More module is a good place to start. Once you understand that, look at all of the other Test::* modules on CPAN to see if one of those modules does the sort of thing you need to do.
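A minimal Test::More script, to give a feel for the style (the add() function is just a stand-in for whatever you are testing):

use strict;
use warnings;
use Test::More tests => 2;

sub add { return $_[0] + $_[1] }    # stand-in for the code under test

is( add(2, 2),  4, 'add() adds small integers'    );
is( add(-1, 1), 0, 'add() handles negative input' );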
If you have a specific question, ask about that. This question is really too broad for anyone to provide a useful answer.
You should look at Perl Testing: A Developer's Notebook by chromatic and Ian Langworth.
I keep meaning to buy a copy but as yet I've just skimmed it at perlmongers meetings. But it seems to be spot on what you're looking for.
UPDATE:
Hm, and this shows I should read the question - testing C programs with Perl, not testing Perl programs with Perl.
The book may still be useful (in that you should probably be writing test scripts and using Test::More and friends), but you will need to write a set of Perl functions to control your C programs if you take that approach. Basically:
sub run_my_c_program {
    my @args = @_;
    # Set up the test environment according to @args
    system "my-c-program";
    my $rv;    # turn the results into a data structure here
    return $rv;
}
and then check $rv in the same way as a normal Perl test:
is_deeply( run_my_c_program(...),
           { .. what I think it returns .. },
           ".. description of what I'm testing .." );
I don't have a tutorial, but I used to be on a test team that tested a C++ compiler. The test harness we used (home-brewed) was written in Perl, and it worked for us for many years. Perl was ideal because we could easily use it to invoke the tools that built the program and capture the output for later insertion into our test logs, using backticks to run the compiler, for example: $compilerOutput = `cl -?`;
Then, if the build was successful, we would run the test programs and capture their output, again for insertion into our test logs.
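A rough sketch of that pattern (the compiler command and the pass/fail pattern are just examples):

# Build the program, capturing stdout and stderr; $? holds the exit status.
my $build_output = `cc -o testprog testprog.c 2>&1`;

if ($? == 0) {
    my $run_output = `./testprog 2>&1`;
    # Regex-based pass/fail detection, as mentioned below.
    print $run_output =~ /\bPASS\b/ ? "pass\n" : "FAIL\n";
}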
Other benefits we got from using Perl:
For pass/fail detection, Perl's regular expression support was helpful.
Our program (the compiler) was ported during the course of its life to many different hardware architectures. Since the Perl source was available and written in C, we could get perl running on a new architecture as soon as a C compiler was available (C is usually the first language implemented in a new environment, after an assembler).
Lots of documentation, books, help available.
-Ron
I wrote an article about Testing C with Libtap that uses Perl's Test::Harness to test C programs. Here's an example project: https://github.com/stig/libggtl/tree/master/t -- it might not build any more, as the esoteric build system it used probably no longer exists, but you should be able to figure out how it works :-)