Perl aids for regression testing - perl

Is there a Perl module that allows me to view diffs between actual and reference output of programs (or functions)? The test fails if there are differences.
Also, in case there are differences but the output is OK (because the functionality has changed) I want to be able to commit the actual output as future reference output.

Perl has excellent utilities for doing testing. The most commonly used module is probably Test::More, which provides all the infrastructure you're likely to need for writing regression tests. The prove utility provides an easy interface for running test suites and summarizing the results. The Test::Differences module (which can be used with Test::More) might be useful to you as well. It formats differences as side-by-side comparisons. As for committing the actual output as the new reference material, that will depend on how your code under test provides output and how you capture it. It should be easy if you write to files and then compare them. If that's the case you might want to use the Text::Diff module within your test suite.
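For example, a minimal sketch of such a test (run_program(), the reference file name, and the UPDATE_REFERENCE switch are placeholders for your own setup) might look like this:
use strict;
use warnings;
use Test::More tests => 1;
use Test::Differences;
# run_program() stands in for whatever produces the actual output
my $actual = run_program();
# To bless the current output as the new reference once you decide it is correct:
#   UPDATE_REFERENCE=1 prove t/output.t
if ($ENV{UPDATE_REFERENCE}) {
    open my $fh, '>', 't/expected.txt' or die $!;
    print {$fh} $actual;
    close $fh;
}
my $reference = do { local (@ARGV, $/) = ('t/expected.txt'); <> };
eq_or_diff($actual, $reference, 'output matches the reference');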

As mentioned, Test::Differences is one of the standard ways of accomplishing this, but I needed to mention PerlUnit: please do not use this. It's "abandonware" and does not integrate with standard Perl testing tools. Thus, for all new test modules coming out, you would have to port their functionality if you wanted to use them. (If someone has picked up the maintenance of this abandoned module, drop me a line. I need to talk to them as I maintain core testing tools I'd like to help integrate with PerlUnit).
Disclaimer: while I didn't write it, I currently maintain Test::Differences, so I might be biased.

I tend to use more of the Test::Simple and Test::More functionality. I looked at PerlUnit and it seems to provide much of the functionality which is already built into the standard libraries with the Test::Simple and Test::More libraries.

I question those of you who recommend the use of PerlUnit. It hasn't had a release in 3 years. If you really want xUnit-style testing, have a look at Test::Class, it does the same job, but in a more Perlish way. The fact that it's still maintained and has regular releases doesn't hurt either.
Just make sure that it makes sense for your project. Maybe good old Test::More is all you need (it usually is for me). I recommend reading the "Why you should [not] use Test::Class" sections in the docs.
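For illustration, here is roughly what a Test::Class-based test looks like (My::Widget and its methods are made-up names for the example):
package My::Widget::Test;
use strict;
use warnings;
use parent 'Test::Class';
use Test::More;

sub setup_widget : Test(setup) {
    my $self = shift;
    $self->{widget} = My::Widget->new;    # My::Widget is a placeholder class
}

sub constructor : Test(1) {
    my $self = shift;
    isa_ok $self->{widget}, 'My::Widget';
}

sub addition : Test(1) {
    my $self = shift;
    is $self->{widget}->add(2, 3), 5, 'add() works';
}

__PACKAGE__->runtests;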

The community standard workhorses are Test::Simple (for getting started with testing) and Test::More (for once you want more than Test::Simple can do for you). Both are built around the concept of expected versus actual output, and both will show you differences when they occur. The perldoc for these modules will get you on your way.
You might also want to check out the Perl QA wiki, and if you're really interested in perl testing, the perl-qa mailing list might be worth looking into -- though it's generally more about creation of testing systems for Perl than using those systems within the language.
Finally, using the module-starter tool (from Module::Starter) will give you a really nice "CPAN standard" layout for new work -- or for dropping existing code into -- including a readymade test harness setup.

For testing the output of a program, there is Test::Command. It allows you to easily verify the stdout, stderr, and exit value of programs. E.g.:
use Test::Command tests => 3;
my $echo_test = Test::Command->new( cmd => 'echo out' );
$echo_test->exit_is_num(0, 'exit normally');
$echo_test->stdout_is_eq("out\n", 'echoes out');
$echo_test->stderr_unlike( qr/something went (wrong|bad)/, 'nothing went bad' );
The module also has a functional interface, if that's more to your liking.

Related

perl - package module alternative to template system

I'm building a web service. Instead of using a template system like Template Toolkit, I'm using package modules like this:
Create the page URLs, with each page in an independent module according to the URL previously created in the route,
and pass as an argument to every module a single hash ref with the variables for the global title, footer, and all other data that is the same on each page (module).
main.pl
use strict;
use warnings;
use Handler;
my %mvs = (    # my variables
    username    => $set{user},
    titleglobal => '| web System ',
    ip          => $env->{REMOTE_ADDR},
    .........
    .........
);
for my $module_url (reverse @all_urls_names) {
    $router->add($module_url, sub {
        $module_url->new(\%mvs);
    });
}
In the page module, I have other modules that load header.pm and footer.pm, but body.pm is loaded directly in the current page module, in this case Handler.pm.
Handler.pm
package Handler;
use strict;
use warnings;
use Layout::Head;
use Layout::Footer;
my $layout = sub {
    my ($head, $body, $footer) = ( Head::new($mvs), thebody($mvs), Footer::new($mvs) );
    return <<THE_HTML;
$head
$body
$footer
THE_HTML
};
return [ 200, [ "Content-Type" => "text/html" ], [ $layout->() ] ];
}
sub thebody {
    .........
    .........
}
I have done this approach having as reference the wordpress layout, all is working fine and good.
Is this a good way to build maintainable code?
Note: I chose this approach because I do not want to install more modules.
(It solves the given problem with the least amount of necessary code: less code to debug and an obvious speedup.)
You say that your constraint is not installing modules. But, what's the on-the-ground difference between installing prewritten modules and creating new ones?
Maybe you have issues with deployment. That's understandable. However, you can use tools such as Carton to bundle an application's dependencies: set up everything on a system where you have the flexibility you need, then deploy to a system where you don't.
With many CPAN modules you can take the libraries directly from the distribution and reuse them. If they don't use XS or need external libraries (say, openssl), they run out of the box.
I don't particularly advise this, but it's doable. You get widely tested modules and the community support that comes with it. That's less code to debug because someone else already did the work! These things are complicated systems and you are going to have to do quite a bit of work to not only debug what you've done but discover everything else that you should have done and supported. Having reinvented a few things myself, I've learned my lesson.
Everyone eventually writes their own templating system (and everyone should as part of their life experience). That's fine. However, you should study what other systems do and how they do it so you don't repeat their mistakes. Some templating modules are small and simple and can be the basis for your explorations. Check out Text::Template for example: it's two module files and no dependencies. Going through this exercise shows you the hidden depths and complexities of what you are trying to do.
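To give a feel for how small the surface can be, here is a hedged sketch of Text::Template in action (the template text and variables are invented for the example):
use strict;
use warnings;
use Text::Template;

my $template = Text::Template->new(
    TYPE   => 'STRING',
    SOURCE => 'Hello, {$name}. You have {$count} new messages.',
);
print $template->fill_in(HASH => { name => 'Ana', count => 3 }), "\n";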
If you are making a web framework, have you looked at Mojolicious? It is a self-contained system that only requires core Perl modules (although you'll likely still need other things like a database connection and so on). It has a nice templating system. For something lighter weight (but how much lighter can you get), your approach looks like CGI::Prototype. Take a look at that.
Lastly, avoiding modules because you're anxious about installing anything might be something you just need to confront and get over with. Almost any system is a bit scary at first and gets better once you get used to it and learn how it works. You might not like CPAN (but what does "like" have to do with getting work done?), but perhaps you can get what you need from system packages already. In the end you want to get more work done. A little work up front can save you a lot of work at the end.
We're here to help when you run into problems installing modules! :)
Is this a good way to build maintainable code?
No.
At the heart of your system you have HTML (and, perhaps, CSS and Javascript) embedded in your Perl code. This might look like a good idea when it's just you maintaining the site, but when you get successful enough to need a separate front-end development team, you'll realise what a terrible idea it is.
Also, you are reinventing wheels. There are many great web frameworks and templating systems available on CPAN. Most of them have been used in production by lots of people over many years. They will have more features than your code and will be far better tested.
You say you're doing this because you don't want to install more modules. I urge you to reconsider this approach. Most modern Perl programming consists of plumbing together the right CPAN modules. You will be needlessly restricting the power of the language.

How can I tell if a Perl module is actually used in my program?

I have been on a "cleaning spree" lately at work, doing a lot of touch-up stuff that should have been done a while ago. One thing I have been doing is deleting modules that were imported into files and never used, or that were used at one point but not anymore. To do this I have just been deleting an import and running the program's test file, which gets really, really tedious.
Is there any programmatic way of doing this? Short of me writing a program myself to do it.
Short answer, you can't.
Longer possibly more useful answer, you won't find a general purpose tool that will tell you with 100% certainty whether the module you're purging will actually be used. But you may be able to build a special purpose tool to help you with the manual search that you're currently doing on your codebase. Maybe try a wrapper around your test suite that removes the use statements for you and ignores any error messages except messages that say Undefined subroutine &__PACKAGE__::foo and other messages that occur when accessing missing features of any module. The wrapper could then automatically perform a dumb source scan on the codebase of the module being purged to see if the missing subroutine foo (or other feature) might be defined in the unwanted module.
You can supplement this with Devel::Cover to determine which parts of your code don't have tests so you can manually inspect those areas and maybe get insight into whether they are using code from the module you're trying to purge.
Due to the halting problem you can't statically determine whether any program, of sufficient complexity, will exit or not. This applies to your problem because the "last" instruction of your program might be the one that uses the module you're purging. And since it is impossible to determine what the last instruction is, or if it will ever be executed, it is impossible to statically determine if that module will be used. Further, in a dynamic language, which can extend the program during its run, analysis of the source or even the post-compile symbol tables would only tell you what was calling the unwanted module just before run-time (whatever that means).
Because of this you won't find a general purpose tool that works for all programs. However, if you are positive that your code doesn't use certain run-time features of Perl you might be able to write a tool suited to your program that can determine if code from the module you're purging will actually be executed.
You might create alternative versions of the modules in question, which have only an AUTOLOAD method (and an import method; see comment) in them. Make this AUTOLOAD method croak on use. Put this module first into the include path.
You might refine this method by making AUTOLOAD only log the usage and then load the real module and forward the original function call. You could also have a subroutine first in @INC which creates the fake module on the fly if necessary.
Of course you need a good test coverage to detect even rare uses.
This concept is definitely not perfect, but it might work with lots of modules and simplify the testing.
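A minimal sketch of such a decoy module (the name Unwanted::Module is hypothetical; drop the file somewhere that is searched before the real module, e.g. via PERL5LIB or -I):
# t/decoy/Unwanted/Module.pm
package Unwanted::Module;
use strict;
use warnings;
use Carp ();

sub import { }    # let "use Unwanted::Module;" itself succeed quietly

our $AUTOLOAD;
sub AUTOLOAD {
    return if $AUTOLOAD =~ /::DESTROY$/;    # don't complain about object teardown
    Carp::croak("module still in use: $AUTOLOAD was called");
}

1;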

Using Inline::CPP vs SWIG - when?

In this question I saw two different answers about how to directly call functions written in C++:
Inline::CPP (and here are more, like Inline::C, Inline::Lua, etc..)
SWIG
Handmade (as daxim told - majority of modules are handwritten)
I just browsed nearly all the SO questions tagged [perl][swig] looking for answers to the following questions:
What are the main differences when choosing between SWIG, Inline::CPP, and handwritten bindings?
When is it good practice (recommended) to use Inline::CPP (or Inline::C), and when is it recommended to use SWIG or handwritten bindings?
As I think about it, SWIG is more universal for other uses, as asked in this question, while Inline::CPP is Perl-specific. But from Perl's point of view, is there any significant difference?
I haven't used SWIG, so I cannot speak directly to it. But I'm pretty familiar with Inline::CPP.
If you would like to compose C++ code that gets compiled and becomes callable from within Perl, Inline::CPP facilitates this. So long as the C++ code doesn't change, it should only compile once. If you base a module on Inline::CPP, the code will be compiled at module install time, so another user never really sees the first time compilation lag; it happens at install time, just before the testing phase.
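For a feel of the workflow, a minimal self-contained script (the add() function is just an example) looks something like this:
use strict;
use warnings;
use Inline CPP => <<'END_CPP';
int add(int a, int b) {
    return a + b;
}
END_CPP

print add(2, 3), "\n";    # compiled on the first run, cached afterwards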
Inline::CPP is not 100% free of portability issues. The target user must have a C++ compiler that is of similar flavor to the C compiler used to build Perl, and the C++ standard libraries should be of versions that produce binary-compatible code with Perl. Inline::CPP has about a 94% success rate with the CPAN testers. And those last 6% almost always boil down to issues of the installation process not correctly deciphering what C++ compiler and libraries to use. ...and of those, it usually comes down to the libraries.
Let's assume you as a module author find yourself in that 94% who have no problem getting Inline::CPP installed. If you know that your target audience will fall into that same category, then producing a module based on Inline::CPP is simple. You basically have to add a couple of directives (VERSION and NAME), and swap out your Makefile.PL's ExtUtils::MakeMaker call for Inline::MakeMaker (it will invoke ExtUtils::MakeMaker). You might also want a CONFIGURE_REQUIRES directive to specify a current version of ExtUtils::MakeMaker when you create your distribution; this ensures that your users have a cleaner install experience.
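As a rough sketch (module name, paths, and version numbers are placeholders), the directives in the module and the Makefile.PL look something like this:
# In lib/My/CPPModule.pm:
#     use Inline CPP => 'DATA', NAME => 'My::CPPModule', VERSION => '0.01';

# Makefile.PL, swapping ExtUtils::MakeMaker for Inline::MakeMaker
use Inline::MakeMaker;
WriteMakefile(
    NAME               => 'My::CPPModule',
    VERSION_FROM       => 'lib/My/CPPModule.pm',
    CONFIGURE_REQUIRES => { 'ExtUtils::MakeMaker' => '6.64' },
);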
Now if you're creating the module for general consumption and have no idea whether your target user will fit that 94% majority who can use Inline::CPP, you might be better off removing the Inline::CPP dependency. You might want to do this just to minimize the dependency chain anyway; it's nicer for your users. In that case, compose your code to work with Inline::CPP, and then use InlineX::CPP2XS to convert it to a plain old XS module. Your user will now be able to install without the process pulling Inline::CPP in first.
C++ is a large language, and Inline::CPP handles a large subset of it. Pay attention to the typemap file to determine what sorts of parameters can be passed (and converted) automatically, and what sorts are better dealt with using "guts and API" calls. One feature I wouldn't recommend using is automatic string conversion, as it would produce Unicode-unfriendly conversions. Better to handle strings explicitly through API calls.
The portion of C++ that isn't handled gracefully by Inline::CPP is template metaprogramming. You're free to use templates in your code, and free to use the STL. However, you cannot simply pass STL type parameters and hope that Inline::CPP will know how to convert them. It deals with POD (basic data types), not STL stuff. Furthermore, if you compose a template-based function or object method, the C++ compiler won't know what context Perl plans to call the function in, so it won't know what type to apply to the template at compiletime. Consequently, the functions and object methods exposed directly to Inline::CPP need to be plain functions or methods; not template functions or classes.
These limitations in practice aren't hard to deal with as long as you know what to expect. If you want to expose a template class directly to Inline::CPP, just write a wrapper class that either inherits or composes itself of the template class, but gives it a concrete type for Inline::CPP to work with.
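For example, a hedged sketch of that wrapper idea, hiding std::vector<int> behind a concrete class (the class and method names are invented for the example):
use strict;
use warnings;
use Inline CPP => <<'END_CPP';
#include <vector>

// Concrete, non-template front end so Inline::CPP can bind it
class IntStack {
  private:
    std::vector<int> data;
  public:
    IntStack() { }
    void push(int n) { data.push_back(n); }
    int  pop()       { int n = data.back(); data.pop_back(); return n; }
    int  size()      { return (int)data.size(); }
};
END_CPP

my $stack = IntStack->new;
$stack->push($_) for 1 .. 3;
print $stack->pop, "\n";    # prints 3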
Inline::CPP is also useful in automatically generating function wrappers for existing C++ libraries. The documentation explains how to do that.
One of the advantages to Inline::CPP over Swig is that if you already have some experience with perlguts, perlapi, and perlcall, you will feel right at home already. With Swig, you'll have to learn the Swig way of doing things first, and then figure out how to apply that to Perl, and possibly, how to do it in a way that is CPAN-distributable.
Another advantage of using Inline::CPP is that it is a somewhat familiar tool in the Perl community. You are going to find a lot more people who understand Perl XS, Inline::C, and to some extent Inline::CPP than you will find people who have used Swig with Perl. Although XS can be messy, it's a road more heavily travelled than using Perl with Swig.
Inline::CPP is also a common topic on the inline@perl.org mailing list. In addition to myself, the maintainer of Inline::C and several other Inline-family maintainers frequent the list, and do our best to assist people who need a hand getting going with the Inline family of modules.
You might also find my Perl Mongers talk on Inline::CPP useful in exploring how it might work for you. Additionally, Math::Prime::FastSieve stands as a proof-of-concept for basing a module on Inline::CPP (with an Inline::CPP dependency). Furthermore, Rob (sisyphus), the current Inline maintainer, and author of InlineX::CPP2XS has actually included an example in the InlineX::CPP2XS distribution that takes my Math::Prime::FastSieve and converts it to plain XS code using his InlineX::CPP2XS.
You should probably also give ExtUtils::XSpp a look. I think it requires you to declare a bit more stuff than Inline::CPP or SWIG, but it's rather powerful.

What is the preferred unit testing framework for Perl?

I'm sort of new to Perl and I'm wondering if there is a preferred unit testing framework.
Google is showing me some nice results, but since I'm new to this, I don't know if there is a clear preference within the community.
Perl has a MASSIVE set of great testing tools that come with it! The Perl core has several tens of thousands of automated checks for it, and for the most part they all use these standard Perl frameworks. They're all tied together using TAP - the Test Anything Protocol.
The standard way of creating TAP tests in Perl is using the Test::More family of packages, including Test::Simple for getting started. Here's a quick example:
use 5.012;
use warnings;
use Test::More tests => 3;
my $foo = 5;
my $bar = 6;
ok $foo == 5, 'Foo was assigned 5.';
ok $bar == 6, 'Bar was assigned 6.';
ok $foo + $bar == 11, 'Addition works correctly.';
And the output would be:
ok 1 - Foo was assigned 5.
ok 2 - Bar was assigned 6.
ok 3 - Addition works correctly.
Essentially, to get started, all you need to do is pass a boolean value and a string explaining what should occur!
Once you get past that step, Test::More has a large number of other functions to make testing other things easier (string, regex compares, deep structure compares) and there's the Test::Harness back end that will let you test large groups of individual test scripts together.
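A few of those helpers in action (everything here is ordinary Test::More):
use strict;
use warnings;
use Test::More tests => 4;

is(lc 'FOO', 'foo', 'string comparison with is()');
like('Hello, world', qr/world/, 'regex match with like()');
is_deeply([1, 2, [3]], [1, 2, [3]], 'deep structure comparison with is_deeply()');
cmp_ok(2 ** 10, '==', 1024, 'numeric comparison with cmp_ok()');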
On top of that, as Schwern pointed out, almost all of the modern Test:: modules work together. That means you can use Test::Class (as pointed out by Markus) with all of the great modules listed in rjh's answer. In fact, because Test::Builder (the tool that Test::More and others are built on, and currently maintained by Schwern... thanks Schwern!) is shared by them all, you can, if needed, build your OWN test subroutines from the ground up that will work with all the other test frameworks. That alone makes Perl's TAP system one of the nicest out there in my opinion: everything works together, everyone uses the same tool, and you can add on to the framework to suit your needs with very little additional work.
Perl's most popular test 'framework' is a test results format known as TAP (Test Anything Protocol) which is a set of strings that look like:
ok 1 - Imported correctly
ok 2 - foo() takes two arguments
not ok 3 - foo() throws an error if passed no arguments
Any script that can generate these strings counts as a Perl test. You can use Test::More to generate TAP for various conditions - checking if a variable is equal to a value, checking if a module imported correctly, or if two structures (arrays/hashes) are identical. But in true Perl spirit, there's more than one way to do it, and there are other approaches (e.g. Test::Class, which looks a bit like JUnit!)
A simple example of a test script (they usually end in .t, e.g. foo.t)
use strict;
use warnings;
use Test::More tests => 3; # Tell Test::More you intend to do 3 tests
my $foo = 3;
ok(defined $foo, 'foo is defined');
is($foo, 3, 'foo is 3');
$foo++;
is($foo, 4, 'incremented foo');
You can use Test::Harness (commonly invoked as prove from the shell) to run a series of tests in sequence, and get a summary of which ones passed or failed.
Test::More can also do some more complex stuff, like mark tests as TODO (don't expect them to pass, but run them just in case) or SKIP (these tests are broken/optional, don't run them). You can declare the number of tests you expect to run, so if your test script dies half-way, this can be detected.
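For instance, a small sketch of TODO and SKIP blocks (Some::Optional::Module is a made-up name):
use strict;
use warnings;
use Test::More tests => 2;

TODO: {
    local $TODO = 'feature not implemented yet';
    ok(0, 'new feature works');    # reported as a TODO failure, not a real one
}

SKIP: {
    skip 'Some::Optional::Module is not installed', 1
        unless eval { require Some::Optional::Module; 1 };
    ok(1, 'optional integration test ran');
}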
Once you begin to do more complex testing, you might find some other CPAN modules useful - here are a few examples, but there are many (many) more:
Test::Exception - test that your code throws an error/doesn't throw any errors
Test::Warn - test that your code does/doesn't generate warnings
Test::Deep - deeply compare objects. They don't have to be identical - you can ignore array ordering, use regexes, ignore classes of objects etc.
Test::Pod - make sure your script has POD (documentation), and that it is valid
Test::Pod::Coverage - make sure that your POD documents all the methods/functions in your modules
Test::DBUnit - test database interactions
Test::MockObject - make pretend objects to control the environment of your tests
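As a taste of the first few modules above, a hedged sketch using Test::Exception and Test::Deep:
use strict;
use warnings;
use Test::More tests => 3;
use Test::Exception;
use Test::Deep;

throws_ok { die "boom\n" } qr/boom/, 'code that should die really dies';
lives_ok  { my $x = 1 + 1 } 'code that should live really lives';
cmp_deeply(
    { name => 'alice', scores => [3, 1, 2] },
    { name => 'alice', scores => bag(1, 2, 3) },    # bag() ignores array order
    'deep comparison ignoring array order',
);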
Definitely start with this page: http://perldoc.perl.org/Test/Simple.html and follow the reference to Test::Tutorial.
If you practice TDD, you will notice that your set of unit tests are changing A LOT. Test::Class follows the xUnit patterns (http://en.wikipedia.org/wiki/XUnit).
For me, the main benefit of xUnit is the encapsulation of each test in methods. The framework names each assertion after its test method, and adds the possibility to run setup and teardown methods before and after each test.
I have tried the "perl-ish" way for unit testing also (just using Test::More), but I find it kind of old-fashioned and cumbersome.
Some anti-recommendations may be in order:
Anti-recommendation:
Do NOT use the Test::Unit family of test packages for Perl, such as Test::Unit::Assert and Test::Unit::TestCases.
Reason: Test::Unit appears to be abandoned.
Test::Unit, Test::Unit::TestCases, and Test::Unit::Assert work pretty well (when I used them, 2015-2016). Test::Unit is supposedly not integrated with Perl's Test Anything Protocol (TAP), although I found that easy to fix.
But Test::Unit is frustrating because so many of the other Perl test packages, mostly built using Test::Builder, like Test::More, Test::Most, Test::Exception, Test::Differences, Test::Deep, Test::Warn, etc., do NOT interact well with the object oriented testing approach of Test::Unit.
You can mix Test::Unit tests and Test::Builder tests once you have adapted Test::Unit to work with Test::More and the TAP; but the good features of these other packages are not available for OO extension. Which is much of the reason to use an xUnit-style test anyway.
Supposedly CPAN's Test::Class allows you to "Easily create test classes in an xUnit/JUnit style" -- but I am not sure that I can recommend this. It certainly doesn't look like xUnit to me - not OO, with idiosyncratic names like is(VAL1,VAL2,TESTNAME) instead of xUnit-style names like $test_object->assert_equals(VAL1,VAL2,TEST_ERR_MSG). Test::Class does have the pleasant feature of auto-detecting all tests annotated :Test, comparable to xUnit and Test::Unit::TestCase's approach of using introspection to run all functions named test_*.
However, the underlying package Test::Builder is object oriented, and hence much more xUnit style. Don't be scared away by the name - it's not a factory, it's mostly a suite with test assert methods. Although most people inherit from it, you can call it directly if you wish, e.g. $test_object->is_eq(VAL1,VAL2,TESTNAME), and often you can use Test::Builder calls to work around the limitations of procedural packages like Test::More that are built on top of Test::Builder - like fixing the call-stack level at which an error is reported.
Test::Builder is usually used singleton style, but you can create multiple objects. I am unsure as to whether these behave as one would expect from an xUnit family test.
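A small sketch of calling Test::Builder directly (nothing here is specific to any framework):
use strict;
use warnings;
use Test::Builder;

my $tb = Test::Builder->new;    # returns the shared singleton
$tb->plan(tests => 2);
$tb->ok(1 + 1 == 2, 'addition works');
$tb->is_eq(join(',', 1 .. 3), '1,2,3', 'is_eq compares as strings');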
So far, there is no easy way to work around limitations such as the fact that Perl TAP tests use one TEST_NAME per assert, without hierarchy, and without distinguishing TEST_NAMEs from TEST_ERROR_MESSAGEs. (The error-reporting level helps with that lack.)
It may be possible to create an adapter that makes Test::Builder and TAP style tests more object oriented, so that you can rebase on something other than TAP (that records more useful info than TAP - supposedly like ANT's XML protocol). I think to adapt the names and/or the missing concepts will either involve going into Test::Builder, or introspection.

What are the popular, contemporary uses for Perl?

What are the popular, contemporary uses for Perl?
Edit
I should have been more specific. I was wondering more about what people are using Perl for on a large scale (popular uses), rather than what it could be used for at the individual level.
As a glue language and system administrators' language, and now it is back to taking over the internet using Catalyst.
At my university, Perl is widely used for bioinformatics tasks: automatically changing the format of a protein data file, checking it against a database, transforming the results back, and so on.
So it's mostly changing file formats, regular expressions, and parsing huge datasets.
The same as ever: Making the impossible, possible. ;-)
Along with Python, the system administrators in my company love it for driving automation tasks. "If something is worth doing, it's worth automating" seems to be a mantra, and if they can do it in five lines, all the better.
The problem with this question is that Perl is a very versatile language. Between code golf and its similarity to awk/sed, it is still widely used as a glue language and quick go-to language for sysadmin tasks.
With CPAN, lots of very useful and more advanced things can be written quickly.
It interfaces well with databases and there are tons of frameworks for web design. It works quite well with Ajax, as I've noticed through my own use of it.
Get into best practices, and you've got a system that is quite good at doing very large programming tasks. Heck, the whole of CPAN is a testament to Perl's reusability and encapsulation.
See skills that are being sought by employers at http://jobs.perl.org/.
Somewhat confused by the question. For coding.
I think it would be better framed as: What isn't Perl used for? Which I'd answer with: Writing device drivers, anyone got any more?
It's used for gui apps (See Padre), Internet apps (Catalyst), other networking/sockets (POE), accessing databases (DBI), Cryptology (Crypt namespace), Web services (SOAP), Handling binary formats (pack/unpack)...
And of course all manner of text processing.
And that's just the stuff I've used it for.. recently.
Amazon and IMDB use Perl, more specifically Mason, IIANM.
I currently am using Perl to write an automated testing suite for my company's web sites (using WWW::Mechanize and WWW::Selenium). One of my co-workers is doing the same for other types of servers. We also use it for our monitoring software (Nagios). And I use Perl daily as a command-line tool to aid in basic sysadmin tasks.
I wrote a short, simple script to parse some data out of a log file recently. I find it pretty easy and useful for quick scripting tasks.
Try running this with the terminal size set to at least 120x50 and you will be enlightened ;).
#
sub j(\$){($
P,$V)= #_;while($$P=~s:^
([()])::x){ $V+=('('eq$1)?-32:31
}$V+=ord( substr( $$P,0,1,""))-74} sub a{
my($I,$K,$ J,$L)=#_ ;$I=int($I*$M/$Z);$K=int(
$K*$M/$Z);$J=int($J*$M /$Z);$L=int($L*$M/$Z); $G=$
J-$I;$F=$L-$K;$E=(abs($ G)>=abs($F))?$G:$F;($E<0) and($
I,$K)=($J,$L);$E||=.01 ;for($i=0;$i<=abs$E;$i++ ){ $D->{$K
+int($i*$F/$E) }->{$I+int($i*$G/$E)}=1}}sub p{$D={};$
Z=$z||.01;map{ $H=$_;$I=$N=j$H;$K=$O=j$H;while($H){$q=ord
substr($H,0,1,"" );if(42==$q){$J=j$H;$L=j$H}else{$q-=43;$L =$q
%9;$J=($q-$L)/9;$L=$q-9*$J-4;$J-=4}$J+=$I;$L+=$K;a($I,$K,$J,$ L);
($I,$K)=($J,$L)}a($I,$K,$N,$O)}#_;my$T;map{$y=$_;map{ $T.=$D->{$y}
->{$_}?$\:' '}(-59..59);$T.="\n"}(-23..23);print"\e[H$T"}$w= eval{
require Win32::Console::ANSI};$b=$w?'1;7;':"";($j,$u,$s,$t,$a,$n,$o
,$h,$c,$k,$p,$e,$r,$l,$C)=split/}/,'Tw*JSK8IAg*PJ[*J#wR}*JR]*QJ[*J'.
'BA*JQK8I*JC}KUz]BAIJT]*QJ[R?-R[e]\RI'.'}Tn*JQ]wRAI*JDnR8QAU}wT8KT'.
']n*JEI*EJR*QJ]*JR*DJ#IQ[}*JSe*JD[n]*JPe*'.'JBI/KI}T8#?PcdnfgVCBRcP'.
'?ABKV]]}*JWe*JD[n]*JPe*JC?8B*JE};Vq*OJQ/IP['.'wQ}*JWeOe{n*EERk8;'.
'J*JC}/U*OJd[OI#*BJ*JXn*J>w]U}CWq*OJc8KJ?O[e]U/T*QJP?}*JSe*JCnTe'.
'QIAKJR}*JV]wRAI*J?}T]*RJcJI[\]3;U]Uq*PM[wV]W]WCT*DM*SJ'. 'ZP[Z'.
'PZa[\]UKVgogK9K*QJ[\]n[RI#*EH#IddR[Q[]T]T]T3o[dk*JE'. '[Z\U'.
'{T]*JPKTKK]*OJ[QIO[PIQIO[[gUKU\k*JE+J+J5R5AI*EJ00'. 'BCB*'.
'DMKKJIR[Q+*EJ0*EK';sub h{$\ = qw(% & # x)[int rand
4];map{printf "\e[$b;%dm",int(rand 6)+101-60* ($w
||0);system( "cls")if$w ;($A,$S)= ($_[1], $
_[0]);($M, #,)= split '}';for( $z=256
;$z>0; $z -=$S){$S*= $A;p #,} sleep$_
[2];while ($_[3]&&($ z+=$ S) <=256){
p#,}}("". "32}7D$j" ."}AG". "$u}OG"
."$s}WG" ."$t","" ."24}(" ."IJ$a"
."}1G$n" ."}CO$o" ."}GG$t" ."}QC"
."$h}" ."^G$e" ."})IG" ."$r",
"32}?" ."H$p}FG$e}QG$r". "}ZC"
."$l", "28}(LC" ."" ."".
"$h}:" ."J$a}EG". "$c"
."}M" ."C$k}ZG". "$e"
."}" ."dG$r","18" ."}("
."D;" ."$C" )}{h(16 ,1,1,0
);h(8, .98,0,0 );h(16 ,1,1,1)
;h(8.0 ,0.98,0, 1); redo}###
#written 060204 by
#liverpole #######
############
You can find out quite a bit about what people are currently doing with Perl by taking a look at the posts submitted to the Enlightened Perl Iron Man Challenge.
Personally, I'm currently using it to build the site for (yet another) AJAX-enabled, Twitterfied, etc., etc. social networking startup.
Web sites, data processing/extraction, system administration, task automation, even GUI programming. Mathematics, bioinformatics, chemistry, geology programs.
At my company we used to use Perl to run hundreds of RegEx's to transform random publisher files into SGML to make electronic books. Alas, those days are over now that we've updated our systems to XML books.
I use Perl for what it was designed for: a Practical way of Extracting useful information from raw data and presenting it in human-readable Reports. This is a very nice Language for this task.