Teaching Programming Best Practices to Perl Developers

I have been delivering training on Programming Practices and on Writing Quality Code to participants who have been working in Java for some time. Object-Oriented Analysis and Design forms the base, and I cover the S.O.L.I.D. principles and excerpts from books like Clean Code, Code Complete 2, and so on.
I am scheduled to deliver training to Perl programmers (with less than 1 yr. of experience in Perl) in two days, and they do not use Moose (an extension of the Perl 5 object system which brings modern object-oriented language features).
I am now confused about how to structure my training, as they don't follow OOP.
Any suggestions?
Regards,
Shardul.

Even without Moose, object-oriented programming in Perl is quite possible, and very common. Many CPAN modules offer their functionality through an object-oriented API, even if many of these also offer a non–object-oriented API. (A good example of this duality is IO::Compress::Zip.) Obviously the norms of object-oriented design in Perl are somewhat different from those in some languages — encapsulation is not enforced by the language, for example — but the overall principles and practices are the same.
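To make that duality concrete, here is a rough sketch (file names invented) of driving IO::Compress::Zip first through its functional interface and then through its object-oriented one:

    use strict;
    use warnings;
    use IO::Compress::Zip qw(zip $ZipError);

    # Functional interface: one exported subroutine does the whole job.
    zip 'report.txt' => 'report.zip'
        or die "zip failed: $ZipError\n";

    # Object-oriented interface: build an object, then call methods on it.
    my $z = IO::Compress::Zip->new('report-oo.zip', Name => 'report.txt')
        or die "zip failed: $ZipError\n";
    open my $in, '<', 'report.txt' or die "open failed: $!\n";
    $z->print($_) while <$in>;
    $z->close;

The point for a training session is that both styles are first-class citizens on CPAN, so "good design" discussions apply to both.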
And even without any sort of object-oriented programming, Moosish or otherwise, there's plenty to talk about in terms of laying out packages, organizing code into functions/subroutines/modules, structuring data, taking advantage of use warnings (or -w) and use strict and -T and CPAN modules, and so on.
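Even a plain, non-OO module gives you plenty to discuss about layout and safety. A minimal sketch (the package name and subroutine are invented for illustration):

    package MyApp::Report;      # hypothetical module name
    use strict;
    use warnings;

    use Exporter 'import';
    our @EXPORT_OK = qw(summarize);

    # A small, well-named subroutine with an explicit interface.
    sub summarize {
        my (%args) = @_;
        die "summarize() needs a 'lines' array ref" unless ref $args{lines} eq 'ARRAY';
        return scalar @{ $args{lines} };
    }

    1;    # modules must return a true value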
I'd also recommend Mark Jason Dominus's book Higher-Order Perl, which he has made available for free download. I don't know to what extent you can race through the whole book in a day and put together something useful in time for your presentation — functional programming is a bit of a paradigm-shift for someone who's not used to it (be it you, or the programmers you're presenting to!) — but you may find some useful things in there that you can use.

A lot of the answers here are about teaching OOP to Perl programmers who don't use it, but your question sounds like you're stymied on how to teach a course on code quality given that your Perl programmers do not use OOP, not that you specifically want to teach OOP to non-OO programmers and force them into that paradigm.
That leaves us with two other paradigms of programming which Perl supports well enough:
Good ol' fashioned Structured Programming, along with Modular Programming
Functional programming support in Perl (also Higher-Order Perl)
I use both of these--combined with a healthy dose of objects as well. I use objects for the same reason that I use good structure, modules, and functional pipelines: each is a tool that brings order and sanity to the programming process. For example, object-oriented programming is the main vehicle for polymorphism--but OOP is not polymorphism itself. Thus if you are writing idioms that assist in polymorphism, they assist in polymorphism; they don't have to be stuck in some ad-hoc library "class" and called like UtilClass->meta_operator( $object ), which has little polymorphism in itself.
Moose is a great object language, but you don't call Moose->has( attribute => is => 'rw', isa => 'object' ). You call the operator has. The power of Moose lies in a library of objects that encapsulate the meta-operations on classes--but also in simple expressive operators that the rather open syntax of Perl allows. I would call that the appreciation of solving the problems that OOP solves with objects.
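In other words, the sugar reads like a declaration rather than a method call on a metaclass. A small illustrative sketch (class and attribute names invented):

    package Point;
    use Moose;

    # 'has' is just an exported subroutine, but it reads like syntax.
    has 'x' => ( is => 'rw', isa => 'Num', default => 0 );
    has 'y' => ( is => 'rw', isa => 'Num', default => 0 );

    __PACKAGE__->meta->make_immutable;
    1;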
Also, I guess I have a problem with your problem, because "not OOP" is a big field. It can range from everything-in-the-mainline coding to not-strictly-OOP (where the process of programming is not simply OOP analysis). So I think you have to know your audience and know what it is they use to keep that code structured and sane. I can't imagine a modern Perl audience that isn't at least object-users.
From there, Perl Best Practices (often abbreviated PBP) can help you. But so would learning that:
simply because OOP is one of the best supports for polymorphism, it isn't polymorphism in itself;
simply because OOP is one of the best supports for encapsulation, it isn't encapsulation in itself;
OOP has been assisted by structured and modular programming--and is not by itself those things. Some of its power is simply those disciplines.
In addition, as big an object author and consumer as I am, OOP is not the way I think. Reusability is the way I think: What have I done before that I do not want to write again? What have I written that is similar? How can I make my current task just an adapter of what has been written before? (And often: how can I sneak my behavior into an established module in a single line?)
As a result, a number of my constructs would fail the pedestrian goal of OOP. To give you a better view: I divide code into two "domains": highly abstract and polymorphic Library code, and the Scripting that I need to do to get the particular function that I'm required to deliver in the current project (this is essentially what "application" means, but I don't think that term would be as clear). As a result, polymorphism is mainly instrumental in providing adaptability, but the adaptation itself is whatever takes the least lines of code. My optimum system would be a library that allows scripting/adaptation at any juncture between library behavior and a set of configurations or scripts that address a particular problem. Again, if I had my druthers, configuration would be injected from the scripted domain and no library code would say "I need a properties file" by itself, unless it was a library module encapsulating the algorithm of configuration instanced in properties files. It would just know that it needs "policies" (or decisions from the application domain) in order to fulfil its function.
Thus, my ideal application contains special purpose "objects", which conform to "roles" but where classes are useless overhead--except that the classes perform the behavior which allow injectable data and behavior. So some of my Perl "objects" violate OOP analysis, because they are simply encapsulations of one-off solutions, kind of like the push-pin (expando) JavaScript objects.
I will often (later) revise a special-purpose object and push it further back into the library domain as I find that I need to write something like this again. All objects in the library domain are just on some level of the spectrum of specified behavior. Also, I arrange "data networks" where there is a Sourced type of class that simply encapsulates the behavior of accessing data either in the object itself or another source object. This helps speed my solutions immensely, but I've never seen it addressed in any duck-cat-dog-car-truck OOP primer. Also templating--especially when combined with "data networks"--is immensely useful in coding solutions in a half-dozen lines or a half-day of work.
So I guess I'm saying that, to the extent that you only know OOP for structuring programming, you won't be able to appreciate how much some older, sound practices or other paradigms do for you--or how things that qualify as OOP can promote mediocre adaptability. (Besides, components are far more current than "objects".) Encapsulation solves many problems, but it also promotes the lack of data where you need it. The idea is to get data where you need it so that your canned behavior can realize the specifics of the problem and operate on it.
Reread some stuff on structured programming
Read some stuff on functional programming (assuming that you're not already familiar with it.)
Also it's possible that even an established, "productive" Perl team is writing ... crap. If they are not OOP programmers because they are simply writing crap code, then by all means teach them OOP and if they lack even structured programming *shove both of them down their throats* (I have a hard time considering the label "professional", here).

Take a good look at 'Perl Best Practices' by Damian Conway. It has lots of solid material in it, and you won't go far wrong taking his advice.
Be aware, though, that Getopt::Clade is only available as a placeholder package - it is vapourware, in other words.

You might want to look at what's covered in the "Modern Perl" book too:
http://onyxneon.com/books/modern_perl/
As the others say - plenty to cover without Moose.
Setting up modules/distros
Testing and TAP (see the test sketch after this list)
Deployment with cpanm / cpan / local::lib
Important changes: 5.8/5.10 vs 5.12 vs 5.14, autodie, etc.
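For the testing point above, a minimal Test::More script emitting TAP might look like this (the module and function under test are invented):

    use strict;
    use warnings;
    use Test::More;

    use_ok('MyApp::Report');    # hypothetical module
    my $count = MyApp::Report::summarize( lines => [ 'a', 'b' ] );
    is( $count, 2, 'summarize counts the lines it is given' );

    done_testing();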

Perl programmers must know about Perl's weakly functional features, like list contexts, map, grep, etc. A little functional style makes Perl infinitely more readable.
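A small illustration of that style (the data is invented): grep filters a list and map transforms it, and the whole thing reads as a pipeline driven by list context.

    use strict;
    use warnings;

    my @lines = ( "alice 42\n", "bob 17\n", "eve 99\n" );

    my @high_scores = map  { my ($name, $score) = split; "$name ($score)" }
                      grep { (split)[1] > 20 }
                      @lines;

    print "$_\n" for @high_scores;    # alice (42), eve (99)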
Perl programmers must also understand Perl's traditional OO features, especially modules, bless, and tie. Make them write an object or maybe tie a Cache::Memcached object around a query or something.
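For reference, a minimal hand-rolled (bless-based) class looks like this; the names are invented, and this is the classic style rather than Moose:

    package Counter;
    use strict;
    use warnings;

    sub new {
        my ($class, %args) = @_;
        my $self = { count => $args{start} // 0 };
        return bless $self, $class;    # bless associates the hash with the class
    }

    sub increment { my $self = shift; return ++$self->{count} }
    sub count     { my $self = shift; return $self->{count}   }

    1;

    # Caller:
    #   my $c = Counter->new( start => 5 );
    #   $c->increment;    # 6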


Why use Perl OO simply as a data encapsulation technique?

I am trying to use a Perl API that has been written to use the Moose OO system but there is absolutely no inheritance, aggregation, or composition involved between the objects.
And, apart from a single optional role for debugging, there are no roles or mixins involved as well.
As far as I can see at the moment, using Moose just seems to add a massive amount of complication and compile-time overhead for very little benefit.
Why would you use Moose, or OO, as a simple method of encapsulation instead of using the far simpler technique of packaging the code into Perl modules?
Just to clarify, I am totally convinced of the many advantages of using Moose to do OO in Perl correctly and completely. I just don't understand why use OO at all as a simple encapsulation technique? I am not after a subjective argument in favour or otherwise of Perl OO. I am hoping that I am missing some advantage to using the OO paradigm here that I am simply not seeing atm.
This question has an excellent series of points about data encapsulation in Perl. N.B. I am not talking about enforced encapsulation, where the system stops you from looking where you shouldn't, but rather about only exposing methods in a package that manipulate the data you want to play with.
Is there some advantage to using OO here that I am missing?
Edit 1: After a bit of detective work, I have just seen that the author of the Perl API is also heavily involved in the maintenance and support of the Moose framework.
Edit 2: I have just seen this question which asks a similar thing from a slightly different angle. It seems like my answer is actually "why would you want to add the complication in the first place?", especially given the info in edit 2 above.
POSSIBLE ANSWER
The API in question seems to be only using the Moose OO system in order to prevent namespace pollution.
This could also be done more than adequately by using Perl packages, though, as @amon points out in the comments, the standard package mechanism can become cumbersome quite quickly. BTW, a big thanks for all the comments that helped clarify what I was asking.
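For comparison, a rough sketch of the plain-package approach (names invented): the namespace separation is there, but the state is a single file-scoped lexical rather than per-object attributes, which is exactly where it starts to get cumbersome.

    package MyApp::Config;    # hypothetical module
    use strict;
    use warnings;

    # Lexical, file-scoped state: hidden from the outside,
    # but there is only ever one copy of it (no instances).
    my %settings;

    sub set { my ($key, $value) = @_; $settings{$key} = $value }
    sub get { my ($key) = @_; return $settings{$key} }

    1;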
Every situation is different, and whether you choose to use Moose or another object framework (or none at all) really comes down to what you're planning to do and what your requirements are.
There might not be any immediate advantage to writing the system in question with Moose, but consider these:
You get free access to Moose's metaobject system, so you can interrogate the objects for useful information in a well-defined way (see the introspection sketch after this list)
You can extend the provided classes using Moose's inheritance system; so even if they don't use inheritance themselves, the framework is already in place for you to do so if needed
You have peace-of-mind because you know the system was built on the most widely-deployed and well-tested object framework for Perl.
People know Moose, meaning there is a higher probability of getting answers to questions if something breaks.
IOW, there is inherent value in using popular tools.
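To illustrate the metaobject point above, a small sketch of the introspection you get for free (it assumes some Moose-based class named Point is already loaded):

    use strict;
    use warnings;

    my $point = Point->new( x => 1, y => 2 );
    my $meta  = $point->meta;                  # the Moose::Meta::Class object

    print "Attributes:   ", join( ', ', map { $_->name } $meta->get_all_attributes ), "\n";
    print "Methods:      ", join( ', ', $meta->get_method_list ), "\n";
    print "Superclasses: ", join( ', ', $meta->superclasses ), "\n";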
Not sure it's relevant to the API in question, but no-one seems to have mentioned data types yet - that's a big benefit of Moose or Moo, having easily defined and understood (and re-usable) type validation and coercion for attributes.
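A hedged sketch of what that buys you (the type and class names are invented):

    package Order;
    use Moose;
    use Moose::Util::TypeConstraints;

    # A reusable constraint plus a coercion from a plain number.
    subtype 'PositiveInt',
        as 'Int',
        where { $_ > 0 },
        message { "'$_' is not a positive integer" };
    coerce 'PositiveInt', from 'Num', via { int($_) };

    has 'quantity' => ( is => 'ro', isa => 'PositiveInt', coerce => 1, required => 1 );

    __PACKAGE__->meta->make_immutable;
    1;

    # Order->new( quantity => 3.7 );   # coerced to 3
    # Order->new( quantity => -1 );    # dies with the message above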

Moose OOP or Standard Perl?

I'm going to be writing some crawlers for a website. The idea is that the site will use some back-end Perl scripts to fetch data from other sites. My design (in a very abstract way..) will be to write a package, let's say:
package MyApp::Crawler::SiteName
where SiteName will be a module/package for crawling a specific site. I will obviously have other packages that will be shared across different modules, but that's not relevant here.
Anyway, to make a long story short, my question is: Why (or why not...) should I prefer Moose over standard OO Perl?
While I disagree with Flimzy's introduction ("I've not used Moose, but I have used this thing that uses Moose"), I agree with his premise.
Use what you feel you can produce the best results with. If the (or a) goal is to learn how to effectively use Moose then use Moose. If the goal is to produce good code and learning Moose would be a distraction to that, then don't use Moose.
Your question however is open-ended (as others have pointed out). There is no answer that will be universally accepted as true, otherwise Moose's adoption rate would be much higher and I wouldn't be answering this. I can really only explain why I choose to use Moose every time I start a new project.
As Sid quotes from the Moose documentation, Moose's core goal is to be a cleaner, standardized way of doing what object-oriented Perl programmers have been doing since Perl 5.0 was released. It provides shortcuts to make doing the right thing simpler than doing the wrong thing, something that is, in my opinion, lacking in standard Perl. It provides new tools to make it simpler to abstract your problem into smaller, more easily solved problems, and it provides a robust introspection and meta-programming API that tries to normalize the bestiary that is hacking Perl internals from Perl space (i.e. what I used to refer to as Symbol Table Witchery).
I've found that my natural sense of how much code is "too much" has been reduced by 66% since I started using Moose[^1]. I've found that I more easily follow good design principles such as encapsulation and information hiding, since Moose provides tools to make it easier than not. Because Moose automatically generates much of the boiler-plate that I normally would have to write out (such as accessor methods, delegate methods, and other such things) I find that it's easier to quickly get up to speed on what I was doing six months ago. I also find myself writing far less tricky code just to save myself a few keystrokes than I would have a few years ago.
It is possible to write clean, robust, elegant Object Oriented Perl that doesn't use Moose[^2]. In my experience it requires more effort and self control. I've found that in those occasions where the project demands I can't use Moose, my regular Object Oriented code has benefitted from the habits I have picked up from Moose. I think about it the same way I would think about writing it with Moose and then type three times as much code as I write down what I expect Moose would generate for me[^3].
So I use Moose because I find it makes me better programmer, and because of it I write better programs. If you don't find this to be true for you too, then Moose isn't the right answer.
[^1]: I used to start to think about my design when I reached ~300 lines of code in a module. Now I start getting uneasy at ~100 lines.
[^2]: Miyagawa's code in Twiggy is an excellent example that I was reading just today.
[^3]: This isn't universally true. There are several stories going around about people who write less maintainable, horrific code by going overboard with the tools that Moose provides. Bad programmers can write bad code anywhere.
You find the answer why to use Moose in the Documentation of it.
The main goal of Moose is to make Perl 5 Object Oriented programming easier, more consistent, and less tedious. With Moose you can think more about what you want to do and less about the mechanics of OOP.
From my experience, and probably others will tell you the same, Moose reduces your code size dramatically. It has a lot of features, and even the standard ones, like validation, requiring values on creation of an object, lazy validation, default values, etc., are so easy and readable that you will never want to be without Moose.
Use Moose. This is from something I wrote last night (using Mouse in this case). It should be pretty obvious what it does, what it validates, and what it sets up. Now imagine writing the equivalent raw OO. It's not super hard but it does start to get much harder to read and not just the code proper but the intent which can be the most important part when reading code you haven't seen before or in awhile.
has "io" => (
    is       => "ro",
    isa      => "FileHandle",
    required => 1,
    handles  => [qw( sysread )],
    trigger  => sub { binmode +shift->{io}, ":bytes" },
);
I wrote a big test class last year that also used the handles functionality to redispatch a ton of methods to the underlying Selenium/WWWMech object. Disappearing this sort of boilerplate can really help readability and maintenance.
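For a feel of the difference, here is a rough hand-rolled approximation of that one attribute (not a drop-in equivalent, just the obvious parts written out by hand):

    sub new {
        my ($class, %args) = @_;
        die "'io' is required" unless exists $args{io};
        die "'io' must be a filehandle"
            unless eval { $args{io}->isa('IO::Handle') } || ref $args{io} eq 'GLOB';
        my $self = bless { io => $args{io} }, $class;
        binmode $self->{io}, ':bytes';    # what the trigger did
        return $self;
    }

    sub io      { $_[0]->{io} }                                   # read-only accessor
    sub sysread { my $self = shift; $self->{io}->sysread(@_) }    # the 'handles' delegation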
I've never used Moose, but I've used Catalyst, and have extensive experience with OO Perl and non-OO Perl. And my experience tells me that the best method to use is the method you're most comfortable using.
For me, that method has become "anything except Catalyst" :) (That's not to say that people who love and swear by Catalyst are wrong--it's just my taste).
If you already have the core of a crawler app that you can build on, use whatever it's written in. If you're starting from scratch, use whatever you have the most experience in--unless this is your chance to branch out and try something new, then by all means, accomplish your task while learning something new at the same time.
I think this is just another instance of "which language is best?" which can never be answered, except by the individual.
When I learned about objects in Perl, the first thing I thought was: why is it so complicated, when Perl usually tries to keep things simple?
With Moose I see that uncomplicated OOP is possible in Perl. It sort of brings Perl's OOP back to a manageable level.
(yes, I admit, I don't like perl's object design.)
Although this was asked 10 years ago, much has changed in the Perl world. I think most people now see that Moose didn't deliver all people thought it might. There are several derivative projects, lots of MooseX shims, and so on. That much churn is the code smell of something that's close to useful but not quite there. Even Stevan Little, the original author, was working on its replacement, Moxie, but it never really went anywhere. I don't think anyone was ever completely satisfied with Moose.
Currently, the future is probably not Moose, irrespective of your estimation of that framework. I have a chapter for Moose in Intermediate Perl, but I'd remove it in the next edition. That's not a decision from quality, just on-the-ground truth from what's happening in that space.
Ovid is working on Corinna, a similar but also different framework that tries to solve some of the same problems. Ovid outlines various things he wants to improve, so you can use that as a basis for your own decision. No matter what you think about either Moose or Corinna, I think the community is now transitioning out of its Moose phase.
You may want to listen to How Moose made me a bad OO programmer, a short talk from Tadeusz Sośnierz at PerlCon 2019.
Even if you're skeptical, I'd say try Moose in a small project before you commit to reconfiguring a large codebase. Then, try it in a medium project. I tend to think that frameworks like Moose look appealing in the sorts of examples that fit on a page, but don't show their weaknesses until the problems get much more complex. Don't go all in until you know a little more.
You should at least know the basics of Moose, though, because you are likely to run into other code that uses it.
The Perl 5 Porters may even include this one in core perl, but a full delivery may happen in the five to ten year range, and even then you'd have to insist on a supported Perl. Most people I encounter are about three to five years behind on Perl versions. If you control your perl version, that's not a problem. Not everyone does, though. Holding out for Corinna may not be a good short-term plan.

Language requirements for AI development [duplicate]

Possible Duplicate:
Why is Lisp used for AI?
What makes a language suitable for Artificial Intelligence development?
I've heard that LISP and Prolog are widely used in this field. What features make them suitable for AI?
Overall I would say the main thing I see about languages "preferred" for AI is that they have higher-order programming along with many tools for abstraction.
It is higher-order programming (a.k.a. functions as first-class objects) that tends to be a defining characteristic of most AI languages, as far as I can see: http://en.wikipedia.org/wiki/Higher-order_programming. That article is a stub, and it leaves out Prolog (http://en.wikipedia.org/wiki/Prolog), which allows higher-order "predicates".
But basically higher-order programming is the idea that you can pass a function around like a variable. Surprisingly, a lot of the scripting languages have functions as first-class objects as well. LISP/Prolog are a given as AI languages. But some of the others might be surprising. I have seen several AI books for Python. One of them is http://www.nltk.org/book. Also I have seen some for Ruby and Perl. If you study more about LISP you will recognize that a lot of its features are similar to modern scripting languages. However, LISP came out in 1958...so it really was ahead of its time.
There are AI libraries for Java. And in Java you can sort of hack functions as first class objects using methods on classes, it is harder/less convenient than LISP but possible. In C and C++ you have function pointers, although again they are much more of a bother than LISP.
Once you have functions as first class objects, you can program much more generically than is otherwise possible. Without functions as first class objects, you might have to construct sum(array), product(array) to perform the different operations. But with functions as first class objects you could compute accumulate(array, +) and accumulate(array, *). You could even do accumulate(array, getDataElement, operation). Since AI is so ill-defined, that type of flexibility is a great help. Now you can build much more generic code that is much easier to extend in ways that were not originally even conceived.
And Lambda (now finding its way all over the place) becomes a way to save typing so that you don't have to define every function. In the previous example, instead of having to make getDataElement(arrayelement) { return arrayelement.GPA } somewhere you can just say accumulate(array, lambda element: return element.GPA, +). So you don't have to pollute your namespace with tons of functions to only be called once or twice.
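Since the answer name-checks Perl along with the other scripting languages, here is roughly what that accumulate idea looks like with Perl's anonymous subs (all names and data are invented):

    use strict;
    use warnings;

    # A generic accumulator: the extraction and combination steps are passed in.
    sub accumulate {
        my ($items, $get, $combine, $initial) = @_;
        my $result = $initial;
        $result = $combine->( $result, $get->($_) ) for @$items;
        return $result;
    }

    my @students = ( { name => 'Ada', gpa => 3.9 }, { name => 'Bob', gpa => 3.1 } );

    my $total_gpa = accumulate(
        \@students,
        sub { $_[0]->{gpa} },     # the "lambda" that extracts a field
        sub { $_[0] + $_[1] },    # the combining operation
        0,
    );
    print "Total GPA: $total_gpa\n";    # 7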
If you go back in time to 1958, basically your choices were LISP, Fortran, or Assembly. Compared to Fortran LISP was much more flexible (unfortunately also less efficient) and offered much better means of abstraction. In addition to functions as first class objects, it also had dynamic typing, garbage collection, etc. (stuff any scripting language has today). Now there are more choices to use as a language, although LISP benefited from being first and becoming the language that everyone happened to use for AI. Now look at Ruby/Python/Perl/JavaScript/Java/C#/and even the latest proposed standard for C you start to see features from LISP sneaking in (map/reduce, lambdas, garbage collection, etc.). LISP was way ahead of its time in the 1950's.
Even now LISP still maintains a few aces in the hole over most of the competition. The macro systems in LISP are really advanced. In C you can go and extend the language with library calls or simple macros (basically a text substitution). In LISP you can define new language elements (think your own if statement, now think your own custom language for defining GUIs). Overall LISP languages still offer ways of abstraction that the mainstream languages still haven't caught up with. Sure you can define your own custom compiler for C and add all the language constructs you want, but no one does that really. In LISP the programmer can do that easily via Macros. Also LISP is compiled and per the programming language shootout, it is more efficient than Perl, Python, and Ruby in general.
Prolog basically is a logic language made for representing facts and rules. What are expert systems but collections of rules and facts. Since it is very convenient to represent a bunch of rules in Prolog, there is an obvious synergy there with expert systems.
Now I think using LISP/Prolog for every AI problem is not a given. In fact, just look at the multitude of Machine Learning/Data Mining libraries available for Java. However, when you are prototyping a new system or are experimenting because you don't know what you are doing, it is way easier to do it with a scripting language than a statically typed one. LISP was one of the earliest languages to have all these features we take for granted. Basically there was no competition at all at first.
Also in general academia seems to like functional languages a lot. So it doesn't hurt that LISP is functional. Although now you have ML, Haskell, OCaml, etc. on that front as well (some of these languages support multiple paradigms...).
The main calling card of both Lisp and Prolog in this particular field is that they support metaprogramming concepts like lambdas. The reason that is important is that it helps when you want to roll your own programming language within a programming language, like you will commonly want to do for writing expert system rules.
To do this well in a lower-level imperative language like C, it is generally best to just create a separate compiler or language library for your new (expert system rule) language, so you can write your rules in the new language and your actions in C. This is the principle behind things like CLIPS.
The two main things you want are the ability to do experimental programming and the ability to do unconventional programming.
When you're doing AI, you by definition don't really know what you're doing. (If you did, it wouldn't be AI, would it?) This means you want a language where you can quickly try things and change them. I haven't found any language I like better than Common Lisp for that, personally.
Similarly, you're doing something not quite conventional. Prolog is already an unconventional language, and Lisp has macros that can transform the language tremendously.
What do you mean by "AI"? The field is so broad as to make this question unanswerable. What applications are you looking at?
LISP was used because it was better than FORTRAN. Prolog was used, too, but no one remembers that. This was when people believed that symbol-based approaches were the way to go, before it was understood how hard the sensing and expression layers are.
But modern "AI" (machine vision, planners, hell, Google's uncanny ability to know what you 'meant') is done in more efficient programming languages that are more sustainable for a large team to develop in. This usually means C++ these days--but it's not like anyone thinks of C++ as a good language for AI.
Hell, you can do a lot of what was called "AI" in the 70s in MATLAB. No one's ever called MATLAB "a good language for AI" before, have they?
Functional programming languages are easier to parallelise due to their stateless nature. There seems to already be a subject about it with some good answers here: Advantages of stateless programming?
As said, it's also generally simpler to build programs that generate programs in LISP due to the simplicity of the language, but this is only relevant to certain areas of AI, such as evolutionary computation.
Edit:
Ok, I'll try and explain a bit about why parallelism is important to AI, using symbolic AI as an example, as it's probably the area of AI that I understand best. Basically it's what everyone was using back in the day when LISP was invented, and the Physical Symbol Hypothesis on which it is based is more or less the same way you would go about calculating and modelling stuff in LISP code. This link explains a bit about it:
http://www.cs.st-andrews.ac.uk/~mkw/IC_Group/What_is_Symbolic_AI_full.html
So basically the idea is that you create a model of your environment, then search through it to find a solution. One of the simplest algorithms to implement is a breadth-first search, which is an exhaustive search of all possible states. While producing an optimal result, it is usually prohibitively time consuming. One way to optimise this is by using a heuristic (A* being an example); another is to divide the work between CPUs.
Due to statelessness, in theory, any node you expand in your search could be run in a separate thread without the complexity or overhead involved in locking shared data. In general, assuming the hardware can support it, the more highly you can parallelise a task, the faster you will get your result. An example of this is the Folding@home project, which distributes work over many GPUs to find optimal protein folding configurations (that may not have anything to do with LISP, but it is relevant to parallelism).
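For reference, the search itself is only a few lines in Perl; a toy breadth-first search over an invented state graph (a parallel version would hand each expansion to a worker, which is where the statelessness pays off):

    use strict;
    use warnings;

    # Each state maps to the states reachable from it.
    my %graph = ( A => ['B', 'C'], B => ['D'], C => ['D'], D => [] );

    sub bfs {
        my ($start, $goal) = @_;
        my @queue = ( [$start] );            # a queue of paths
        my %seen  = ( $start => 1 );
        while ( my $path = shift @queue ) {
            my $node = $path->[-1];
            return @$path if $node eq $goal;
            for my $next ( @{ $graph{$node} } ) {
                next if $seen{$next}++;
                push @queue, [ @$path, $next ];
            }
        }
        return;                               # no path found
    }

    print join( ' -> ', bfs('A', 'D') ), "\n";    # A -> B -> D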
As far as I know about LISP, it is a functional programming language, and with it you are able to make "programs that make programs". I don't know if my answer suits your needs; see the above links for more information.
Pattern matching constructs with instantiation (or the ability to easily construct pattern matching code) are a big plus. Pattern matching is not totally necessary to do A.I., but it can sure simplify the code for many A.I. tasks. I'm finding this also makes F# a convenient language for A.I.
Languages per se (without libraries) are suitable/comfortable for specific areas of research/investigation and/or learning/studying ("how to do the simplest things in the hardest way").
Suitability for commercial development is determined by the availability of frameworks, libraries, development tools, communities of developers, and adoption by companies. For example, on the internet you will find support for almost any issue or area, even the most exotic ones (including, of course, AI areas), in C#, because it is mainstream.
BTW, what specifically is the context of the question? AI is such a broad term.
Update:
Oooops, I really did not expect to draw attention and discussion to my answer.
By "how to do the simplest things in the hardest way", I mean that studying and learning, as well as academic R&D objectives/techniques/approaches/methodology, do not coincide with the objectives of (commercial) development.
In student (or even academic) projects one can write tons of code which would probably require one line of code in commercial RAD (using of component/service/feature of framework or library).
Because..! oooh!
Because there is no sense in developing any discussion without first agreeing on common definitions of terms... which are subjective, depend on context, and are not so easy to formulate in a general/abstract context.
And this is an inter-disciplinary matter spanning whole areas of different sciences.
The question is broad (philosophical) and evasively formulated... without beginning and end... having no definitive answers without context and definitions...
Are we going to develop here some spec proposal?

How do you do Design by Contract in Perl?

I'm investigating using DbC in our Perl projects, and I'm trying to find the best way to verify contracts in the source (e.g. checking pre/post conditions, invariants, etc.)
Class::Contract was written by Damian Conway and is now maintained by C. Garret Goebel, but it looks like it hasn't been touched in over 8 years.
It looks like what I want to use is Moose, as it seems as though it might offer functionality that could be used for DbC, but I was wondering if anyone had any resources (articles, etc.) on how to go about this, or if there are any helpful modules out there that I haven't been able to find.
Is anyone doing DbC with Perl? Should I just "jump in" to Moose and see what I can get it to do for me?
Moose gives you a lot of the tools (if not all the sugar) to do DbC. Specifically, you can use the before, after and around method hooks (here's some examples) to perform whatever assertions you might want to make on arguments and return values.
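A rough sketch of pre- and post-conditions via method modifiers (the Account class and its invariant are invented):

    package Account;
    use Moose;

    has 'balance' => ( is => 'rw', isa => 'Num', default => 0 );

    sub withdraw {
        my ($self, $amount) = @_;
        $self->balance( $self->balance - $amount );
    }

    # Precondition: the argument must be positive and covered by the balance.
    before 'withdraw' => sub {
        my ($self, $amount) = @_;
        die "precondition failed: amount must be positive" unless $amount > 0;
        die "precondition failed: insufficient funds"      if $amount > $self->balance;
    };

    # Postcondition / invariant: the balance must never go negative.
    after 'withdraw' => sub {
        my ($self) = @_;
        die "invariant violated: negative balance" if $self->balance < 0;
    };

    __PACKAGE__->meta->make_immutable;
    1;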
As an alternative to "roll your own DbC" you could use a module like MooseX::Method::Signatures or MooseX::Method to take care of validating parameters passed to a subroutine. These modules don't handle the "post" or "invariant" validations that DbC typically provides, however.
EDIT: Motivated by this question, I've hacked together MooseX::Contract and uploaded it to the CPAN. I'd be curious to get feedback on the API as I've never really used DbC first-hand.
Moose is an excellent OO system for Perl, and I heartily recommend it to anyone coding objects in Perl. You can specify "subtypes" for your class members that will be enforced when set by accessors or constructors (the same system can be used with the Moose::Methods package for functions). If you are coding more than one-liners, use Moose;
As for doing DbC, well, might not be the best fit for perl5. It's going to be hard in a language that offers you very few guarantees. Personally, in a lot of dynamic languages, but especially perl, I tend to make my guiding philosophy DRY, and test-driven development.
I would also recommend using Moose.
However as an "alternative" take a look at Sub::Contract.
To quote the author....
Sub::Contract offers a pragmatic way to implement parts of the programming by contract paradigm in Perl.
Sub::Contract is not a design-by-contract framework.
Sub::Contract aims at making it very easy to constrain subroutines input arguments and return values in order to emulate strong typing at runtime.
If you don't need class invariants, I've found the following Perl Hacks book recommendation to be a good solution for some programs. See Smart::Comments.
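The hack boils down to assertions written as specially formatted comments, which Smart::Comments turns into real checks when it is loaded (and which cost nothing when it is not). A tiny sketch; the exact '### assert:' keyword is from memory of the module's documentation, so treat it as an assumption to verify against the POD:

    use strict;
    use warnings;
    use Smart::Comments;

    sub divide {
        my ($numerator, $denominator) = @_;
        ### assert: $denominator != 0
        my $result = $numerator / $denominator;
        ### assert: defined $result
        return $result;
    }

    print divide(10, 2), "\n";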

When should I use OO Perl?

I'm just learning Perl.
When is it advisable to use OO Perl instead of non-OO Perl?
My tendency would be to always prefer OO unless the project is just a code snippet of < 10 lines.
TIA
From Damian Conway:
10 criteria for knowing when to use object-oriented design
Design is large, or is likely to become large
When data is aggregated into obvious structures, especially if there’s a lot of data in each aggregate
For instance, an IP address is not a good candidate: There’s only 4 bytes of information related to an IP address. An immigrant going through customs has a lot of data related to him, such as name, country of origin, luggage carried, destination, etc.
When types of data form a natural hierarchy that lets us use inheritance.
Inheritance is one of the most powerful features of OO, and the ability to use it is a flag.
When operations on data vary by data type
GIFs and JPGs might have their cropping done differently, even though they’re both graphics.
When it’s likely you’ll have to add data types later
OO gives you the room to expand in the future.
When interactions between data are best shown by operators
Some relations are best shown by using operators, which can be overloaded.
When implementation of components is likely to change, especially in the same program
When the system design is already object-oriented
When huge numbers of clients use your code
If your code will be distributed to others who will use it, a standard interface will make maintenance and safety easier.
When you have a piece of data on which many different operations are applied
Graphics images, for instance, might be blurred, cropped, rotated, and adjusted.
When the kinds of operations have standard names (check, process, etc)
Objects allow you to have DB::check, ISBN::check, Shape::check, etc. without having conflicts between the types of check (see the small sketch after this list).
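A tiny illustration of that last point (the classes are invented, and the names simply mirror the examples above): the same method name resolves to different code for each kind of object, with no conflict.

    package ISBN;
    sub new   { my ($class, %args) = @_; return bless { %args }, $class }
    sub check { my $self = shift; return $self->{number} =~ /^[\d-]{10,17}$/ }

    package DB;
    sub new   { my ($class, %args) = @_; return bless { %args }, $class }
    sub check { my $self = shift; return defined $self->{handle} }

    package main;
    use strict;
    use warnings;

    for my $thing ( ISBN->new( number => '0-596-00173-8' ), DB->new( handle => undef ) ) {
        printf "%s check: %s\n", ref $thing, $thing->check ? 'ok' : 'failed';
    }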
There is a good discussion about the same subject at PerlMonks.
Having Moose certainly makes it easier to always use OO from the word go. The only real exception is if compilation start-up is an issue (Moose does currently have a compile time overhead).
I don't think you should measure it by lines of code.
You are right: often when you are just writing a simple script, OO is probably too much overhead, but I think you should be more flexible regarding the 10-line approach.
In all cases, when you are using OO Perl, remember to use Moose (or Mouse).
This question doesn't have that much to do with Perl. The question is "when, given a choice, should I use OO?" That "given a choice" bit is because in some languages (Java, for example), you really don't have any choice.
The answer is "when it makes sense". Think about the problem you're trying to solve. Does the problem fit into the OO concepts of classes and object? If it does, great, use OO. Otherwise use some other paradigm.
Perl is fairly flexible, and you can easily write procedural, functional, or OO Perl, or even mix them together. Don't get hung up on doing OO because everyone else is. Learn to use the right approach for each task.
All of this takes experience and practice, so make sure to try all these approaches out, and maybe even take some smaller problems and solve them in multiple ways to see how each works.
Damian Conway has a passage in Perl Best Practices about this. It is not a rule that you have to follow, but it is probably better advice than I can give without knowing a lot about what you are doing.
Here is the publisher's page if that is a better place to link to the book.