What is best practice in Perl when data is passed incorrectly to a subroutine? Should the sub die or just return?
Here is what I usually do
my @text = ('line 1', 'line 2');
print_text(\@text)
    or die "ERROR: something went wrong in the sub";

sub print_text {
    my ($aref_text) = @_;
    return unless ref($aref_text) eq "ARRAY";
    print "$_\n" for @{$aref_text};
    return 1;
}
Here the sub just returns if the passed input is invalid and it expects the caller to check for errors as it does here. I wonder if it is always a better practice to just "die" at the sub level. In big scripts, I'm afraid of doing that because I don't want to kill the entire script just because some simple sub fails.
On the other hand, I'm afraid of just returning because if the caller forgets to check if the sub returns true, then the script will keep going and weird stuff could happen.
Thanks
This falls squarely under the question of how to deal with errors in subroutines in general.
In principle, these are the ways to handle errors in subroutines that can't themselves recover:
return codes, some of which indicate errors
return "special" values, like undef in Perl
throw exceptions, and a device for that in Perl is die
The caller either checks the return, or tests for undef, or uses eval† to catch and handle the die. What is most suitable depends entirely on the context and on what the code does.
I don't see much reason in modern languages to be restrained to "codes" (like negative values) that indicate errors. For one thing, that either interferes with legitimate returns or it constrains them to go via pointer/reference, which is a big design decision.
Returning undef is often a good middle-of-the-road approach, in particular in code that isn't overly complex. It indicates some "failure" of the sub to perform what it is meant to. However, even in the smallest of subs undef may be suitable to indicate a result that isn't acceptable. Then if it is also used for bad input we have a problem of distinguishing between those failings.
Throwing an exception, based in Perl on the simple die, adds more possibilities. In complex code you may well want to write (or use) an error-handling class that mimics a more elaborate exception handling support from languages that have it, and then throw that
my $error_obj = ErrorHandlingClass->new( params );
... or die $error_obj;
Then the calling code can analyze the object. This would be the most structured way to do it.
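For instance, the caller might catch and inspect such an object like this. (A minimal sketch with a made-up My::Error class standing in for the error-handling class you'd write; the source and message accessors are inventions for the example.)

use strict;
use warnings;

# A made-up minimal error class, standing in for one you'd write
package My::Error;
sub new     { my ($class, %args) = @_; return bless { %args }, $class }
sub message { return $_[0]->{message} }
sub source  { return $_[0]->{source} }

package main;

sub risky_operation {
    # Simulate a failure by throwing an error object
    die My::Error->new( source => 'risky_operation', message => 'bad input' );
}

my $result = eval { risky_operation() };
if ( my $err = $@ ) {
    if ( ref $err && $err->isa('My::Error') ) {
        warn 'Failed in ' . $err->source . ': ' . $err->message . "\n";
    }
    else {
        die $err;    # not one of ours -- re-throw
    }
}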
A nice and simple example is Path::Tiny, with its own Path::Tiny::Error found in its source.
Again, what is suitable in any one particular case depends on details of that application.
A few comments on direct questions.
The dilemma of what to return is stressed by the information-free message in die (it tells us nothing of what failed). But how do we make the failure informative, in this case?
Note that your or results in a die if the sub returns 0 or an empty string. If we replace it with // (defined-or), so to die on undef, we still can't print a specific message if undef may also indicate a bad result.
So in this case you may want the function to die on bad input, with a suitable message.
That would do it for debugging after there's been a problem. If the code needs to be able to recover then you'd better return more structured information -- throw (or return) an object of an error handling class you'd write. (As an ad hoc stop-gap measure you can parse the message from die.)
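For example, the sub from the question could croak on bad input with a specific message (a sketch; with Carp's croak the error is reported from the caller's perspective, which is usually what you want for argument errors):

use strict;
use warnings;
use Carp qw(croak);

sub print_text {
    my ($aref_text) = @_;
    croak "print_text() expects an array reference, got " . (ref($aref_text) || 'a plain scalar')
        unless ref($aref_text) eq 'ARRAY';
    print "$_\n" for @{$aref_text};
    return 1;
}

my @text = ('line 1', 'line 2');
print_text(\@text);      # normal call -- no error checking needed at the call site
# print_text("oops");    # would croak, reporting the caller's file and line

The caller then only needs an eval (or try/catch) around the call if it actually wants to recover.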
As for the age-old question of discipline to check returns, a die is a good tool. There is no "simple sub" that is unworthy – you do not want to proceed with an error so it's OK to die. And in complex projects error handling is more complex, so we need more tools and structure, not less.
Recall that exceptions "bubble up", propagate up the call stack if unhandled, and so does die. This can be used nicely for debugging without having eval on every single call. In the end, most of this is a part of debugging.
There is no "best practice" for this. But a default of die-ing is rather reasonable.
† By now we seem to be getting try/catch-style exception handling (for die) supported in the core. It is introduced as experimental in 5.34.0, but they recommend using Feature::Compat::Try for now. This is ported from Syntax::Keyword::Try.
Related
I am trying to extract data from website using Perl API. I am using a list of URIs to get the data from the website.
Initially the problem was that if there was no data available for the URI it would die, and I wanted it to skip that particular URI and go to the next available URI. I used next unless ....; to get around this problem.
Now the problem is that I am trying to extract specific data from the web by calling a specific method (called identifiers()) from the API. The data is available for the URI, but the specific data I am looking for (the identifiers) is not, and it dies.
I tried to use eval{} like this
eval {
    for $bar ($foo->identifiers()) {
        # do something
    }
};
When I use eval{} I think it skips the error and moves ahead but I am not sure. Because the error it gives is Invalid content type in response:text/plain.
Whereas I checked the URI manually, though it doesn't have the identifiers it has rest of the data. I want this to skip and move to next URI. How can I do that?
OK, I think I understand your question, but a little more code would have helped, as would specifying which Perl API -- not that it seems to matter to the answer, but it is a big part of your question. Having said that, the problem seems very simple.
When Perl hits an error, like most languages, it runs out through the calling contexts in order until it finds a place where it can handle the error. Perl's most basic error handling is eval{} (but I'd use Try::Tiny if you can, as it is then clearer that you're doing error handling instead of some of the other strange things eval can do).
Anyway, when the error reaches an eval{}, the whole of the eval{} exits and $@ is set to the error. So, having the eval{} outside the loop means errors will leave the loop. If you put the eval{} inside the loop, when an error occurs, eval{} will exit, but you will carry on to the next iteration. It's that simple.
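In other words, with the eval{} moved inside the loop, the shape is roughly this. (A self-contained sketch: fetch_identifiers and the sample URIs are dummy stand-ins for your real API calls, just so the example runs.)

use strict;
use warnings;

# Dummy stand-ins so the sketch runs on its own; replace with your real API calls
my @uris = ('uri-with-identifiers', 'uri-without-identifiers');
sub fetch_identifiers {
    my ($uri) = @_;
    die "Invalid content type in response:text/plain\n" if $uri eq 'uri-without-identifiers';
    return ('id1', 'id2');
}

for my $uri (@uris) {
    eval {
        for my $bar ( fetch_identifiers($uri) ) {
            print "$uri -> $bar\n";    # do something with each identifier
        }
        1;
    } or do {
        warn "Skipping $uri: $@";
        next;                          # carry on with the next URI
    };
}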
I also detect signs that maybe you're not using use strict; and use warnings;. Please do, as they help you find many bugs quicker.
A discussion in another question got me wondering: what do other programming languages' exception systems have that Perl's lacks?
Perl's built-in exceptions are a bit ad-hoc in that they were, like the Perl 5 object system, sort-of bolted on as an afterthought, and they overload other keywords (eval and die) which are not dedicated specifically to exceptions.
The syntax can be a little ugly, compared to languages with builtin try/throw/catch type syntax. I usually do it like this:
eval {
    do_something_that_might_barf();
};
if ( my $err = $@ ) {
    # handle $err here
}
There are several CPAN modules that provide syntactic sugar to add try/catch keywords and to allow the easy declaration of exception class hierarchies and whatnot.
The main problem I see with Perl's exception system is the use of the special global $@ to hold the current error, rather than a dedicated catch-type mechanism that might be safer, from a scope perspective, though I've never personally run into any problems with $@ getting munged.
The typical method most people have learned to handle exceptions is vulnerable to missing trapped exceptions:
eval { some code here };
if( $@ ) { handle exception here };
You can do:
eval { some code here; 1 } or do { handle exception here };
This protects from missing the exception due to $@ being clobbered, but it is still vulnerable to losing the value of $@.
To be sure you don't clobber an exception, when you do your eval, you have to localize $@:
eval { local $@; some code here; 1 } or do { handle exception here };
This is all subtle breakage, and prevention requires a lot of esoteric boilerplate.
In most cases this isn't a problem. But I have been burned by exception eating object destructors in real code. Debugging the issue was awful.
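A contrived sketch of that failure mode follows (whether the exception actually gets eaten depends on your Perl version; 5.14 changed when $@ is assigned during stack unwinding, which mitigates exactly this case):

use strict;
use warnings;

package EatsErrors;
sub new     { return bless {}, shift }
sub DESTROY { eval { 1 } }    # an eval inside a destructor can reset $@ on older perls

package main;

eval {
    my $guard = EatsErrors->new;
    die "the real error\n";
    # $guard's DESTROY runs during unwinding and can clobber $@
};
print $@ ? "caught: $@" : "the exception was eaten -- \$\@ is empty\n";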
The situation is clearly bad. Look at all the modules on CPAN built to provide decent exception handling.
Overwhelming responses in favor of Try::Tiny combined with the fact that Try::Tiny is not "too clever by half", have convinced me to try it out. Things like TryCatch and Exception::Class::TryCatch, Error, and on and on are too complex for me to trust. Try::Tiny is a step in the right direction, but I still don't have a lightweight exception class to use.
Try::Tiny (or modules built on top of it) is the only correct way to deal with exceptions in Perl 5. The issues involved are subtle, but the linked article explains them in detail.
Here's how to use it:
use Try::Tiny;
use feature 'say';    # for say() below

try {
    my $code = 'goes here';
    succeed() or die 'with an error';
}
catch {
    say "OH NOES, YOUR PROGRAM HAZ ERROR: $_";
};
eval and $@ are moving parts you don't need to concern yourself with.
Some people think this is a kludge, but having read the implementations of other languages (as well as Perl 5), it's no different than any other. There is just the $@ moving part that you can get your hand caught in... but as with other pieces of machinery with exposed moving parts... if you don't touch it, it won't rip off your fingers. So use Try::Tiny and keep your typing speed up ;)
Some exception classes, e.g. Error, cannot handle flow control from within try/catch blocks. This leads to subtle errors:
use strict; use warnings;
use Error qw(:try);

foreach my $blah (@somelist)
{
    try
    {
        somemethod($blah);
    }
    catch Error with
    {
        my $exception = shift;
        warn "error while processing $blah: " . $exception->stacktrace();
        next;    # bzzt, this will not do what you want it to!!!
    };
    # do more stuff...
}
The workaround is to use a state variable and check that outside the try/catch block, which to me looks horribly like stinky n00b code.
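The workaround looks something like this (a sketch, reusing the same Error-style blocks as above):

use strict; use warnings;
use Error qw(:try);

foreach my $blah (@somelist)
{
    my $failed = 0;                  # state variable, set inside the catch block
    try
    {
        somemethod($blah);
    }
    catch Error with
    {
        my $exception = shift;
        warn "error while processing $blah: " . $exception->stacktrace();
        $failed = 1;                 # can't call next from in here...
    };
    next if $failed;                 # ...so the flow control happens out here
    # do more stuff...
}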
Two other "gotchas" in Error (both of which have caused me grief as they are horrible to debug if you haven't run into this before):
use strict; use warnings;

try
{
    # do something
}
catch Error with
{
    # handle the exception
}
Looks sensible, right? This code compiles, but leads to bizarre and unpredictable errors. The problems are:
use Error qw(:try) was omitted, so the try {}... block will be misparsed (you may or may not see a warning, depending on the rest of your code)
missing semicolon after the catch block! Unintuitive as control blocks do not use semicolons, but in fact try is a prototyped method call.
Oh yeah, that also reminds me that because try, catch etc are method calls, that means that the call stack within those blocks will not be what you expect. (There's actually two extra stack levels because of an internal call inside Error.pm.) Consequently, I have a few modules full of boilerplate code like this, which just adds clutter:
my $errorString;
try
{
    $x->do_something();
    if ($x->failure())
    {
        $errorString = 'some diagnostic string';
        return;    # break out of try block
    }
    do_more_stuff();
}
catch Error with
{
    my $exception = shift;
    $errorString = $exception->text();
}
finally
{
    local $Carp::CarpLevel += 2;
    croak "Could not perform action blah on " . $x->name() . ": " . $errorString
        if $errorString;
};
A problem I recently encountered with the eval exception mechanism has to do with the $SIG{__DIE__} handler. I had -- wrongly -- assumed that this handler only gets called when the Perl interpreter is exited through die() and wanted to use this handler for logging fatal events. It then turned out that I was logging exceptions in library code as fatal errors which clearly was wrong.
The solution was to check for the state of the $^S or $EXCEPTIONS_BEING_CAUGHT variable:
use English;
$SIG{__DIE__} = sub {
if (!$EXCEPTION_BEING_CAUGHT) {
# fatal logging code here
}
};
The problem I see here is that the __DIE__ handler is used in two similar but different situations. That $^S variable very much looks like a late add-on to me. I don't know if this is really the case, though.
With Perl, language and user-written exceptions are combined: both set $@. In other languages, language exceptions are separate from user-written exceptions and create a completely separate flow.
You can catch the base of user written exceptions.
If there is My::Exception::one and My::Exception::two
if ($@ and $@->isa('My::Exception'))
will catch both.
Remember to catch any non-user exceptions with an else.
elsif ($@)
{
    print "Other Error $@\n";
    exit;
}
It's also nice to wrap the exception in a sub and call the sub to throw it.
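Something like this, for instance (a sketch; My::Exception stands in for whatever base class your exceptions use):

use strict;
use warnings;

package My::Exception;
sub new { my ($class, %args) = @_; return bless { %args }, $class }

package main;

# The thrower builds the object and dies with it, so call sites stay short
sub throw_my_exception {
    my (%args) = @_;
    die My::Exception->new(%args);
}

eval { throw_my_exception( message => 'lookup failed' ) };
warn "caught My::Exception\n" if $@ && ref($@) && $@->isa('My::Exception');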
In C++ and C#, you can define types that can be thrown, with separate catch blocks that manage each type. Perl's type system has certain niggling issues related to RTTI and inheritance, according to what I read on chromatic's blog.
I'm not sure how other dynamic languages manage exceptions; both C++ and C# are static languages and that bears with it a certain power in the type system.
The philosophical problem is that Perl 5 exceptions are bolted on; they aren't built from the start of the language design as something integral to how Perl is written.
It has been a looong time since I used Perl, so my memory may be fuzzy and/or Perl may have improved, but from what I recall (in comparison with Python, which I use on a daily basis):
since exceptions are a late addition, they are not consistently supported in the core libraries
(Not true; they are not consistently supported in core libraries because the programmers that wrote those libraries don't like exceptions.)
there is no predefined hierarchy of exceptions - you can't catch a related group of exceptions by catching the base class
there is no equivalent of try:... finally:... to define code that will be called regardless of whether an exception was raised or not, e.g. to free up resources.
(finally in Perl is largely unnecessary -- objects' destructors run immediately after scope exit, not whenever there happens to be memory pressure. So you can actually deallocate any non-memory resources in your destructor, and it will work sanely. A sketch after this list illustrates the idea.)
(as far as I can tell) you can only throw strings - you can't throw objects that have additional information
(Completely false. die $object works just as well as die $string.)
you can't get a stack trace showing you where the exception was thrown - in Python you get detailed information including the source code for each line in the call stack
(False. perl -MCarp::Always and enjoy.)
it is a butt-ugly kludge.
(Subjective. It's implemented the same way in Perl as it is everywhere else. It just uses differently-named keywords.)
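To illustrate the destructor point above: a throwaway guard object can stand in for finally, because its DESTROY runs when the scope exits, whether normally or via die. (A minimal sketch with a made-up ScopeGuard class.)

use strict;
use warnings;

package ScopeGuard;
sub new     { my ($class, $code) = @_; return bless { code => $code }, $class }
sub DESTROY { $_[0]->{code}->() }

package main;

sub work {
    my $guard = ScopeGuard->new( sub { print "cleanup runs here\n" } );
    die "something went wrong\n";    # the guard's DESTROY still fires on scope exit
}

eval { work() };
print "caught: $@" if $@;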
Don't use Exceptions for regular errors. Only Fatal problems that will stop the current execution should die. All other should be handled without die.
Example: parameter validation in a called sub. Don't die at the first problem; check all the other parameters and then decide whether to stop by returning something, or to warn, correct the faulty parameters, and proceed. Do that in test or development mode, but possibly die in production mode. Let the application decide this.
JPR (my CPAN login)
Greetings from Sögel, Germany
I've got a bunch of questions about how people use exceptions in Perl. I've included some background notes on exceptions, skip this if you want, but please take a moment to read the questions and respond to them.
Thanks.
Background on Perl Exceptions
Perl has a very basic built-in exception system that provides a spring-board for more sophisticated usage.
For example, die "I ate a bug.\n"; throws an exception with a string assigned to $@.
You can also throw an object, instead of a string: die BadBug->new('I ate a bug.');
You can even install a signal handler to catch the SIGDIE pseudo-signal. Here's a handler that rethrows exceptions as objects if they aren't already.
use Scalar::Util qw(blessed);

$SIG{__DIE__} = sub {
    my $e = shift;
    $e = ExceptionObject->new( $e ) unless blessed $e;
    die $e;
};
This pattern is used in a number of CPAN modules, but perlvar says:
Due to an implementation glitch, the $SIG{__DIE__} hook is called even inside an eval(). Do not use this to rewrite a pending exception in $@, or as a bizarre substitute for overriding CORE::GLOBAL::die(). This strange action at a distance may be fixed in a future release so that $SIG{__DIE__} is only called if your program is about to exit, as was the original intent. Any other use is deprecated.
So now I wonder if objectifying exceptions in sigdie is evil.
The Questions
Do you use exception objects? If so, which one and why? If not, why not?
If you don't use exception objects, what would entice you to use them?
If you do use exception objects, what do you hate about them, and what could be better?
Is objectifying exceptions in the DIE handler a bad idea?
Where should I objectify my exceptions? In my eval{} wrapper? In a sigdie handler?
Are there any papers, articles or other resources on exceptions in general and in Perl that you find useful or enlightening?
Cross-posted at Perlmonks.
I don't use exception objects very often; mostly because a string is usually enough and involves less work. This is because there is usually nothing the program can do about the exception. If it could have avoided the exception, it wouldn't have caused it in the first place.
If you can do something about the exceptions, use objects. If you are just going to kill the program (or some subset, say, a web request), save yourself the effort of coming up with an elaborate hierarchy of objects that do nothing more than contain a message.
As for number 4: $SIG{__DIE__} should never be used. It doesn't compose; if one module expects sigdie to work in one way, and another module is loaded that makes it work some other way, those modules can't be used in the same program anymore. So don't do that.
If you want to use objects, just do the very-boring die Object->new( ... ). It may not be exciting as some super-awesome magic somewhere, but it always works and the code does exactly what it says.
Possible Duplicates:
How can I cleanly handle error checking in Perl?
What’s broken about exceptions in Perl?
I saw code which works like this:
do_something($param) || warn "something went wrong\n";
and I also saw code like this:
eval {
    do_something_else($param);
};
if ($@) {
    warn "something went wrong\n";
}
Should I use eval/die in all my subroutines? Should I write all my code based on stuff returned from subroutines? Isn't eval'ing the code ( over and over ) gonna slow me down?
Block eval isn't string eval, so no, it's not slow. Using it is definitely recommended.
There are a few annoying subtleties to the way it works though (mostly annoying side-effects of the fact that $@ is a global variable), so consider using Try::Tiny instead of memorizing all of the little tricks that you need to use eval defensively.
do_something($param) || warn "something went wrong\n";
In this case, do_something is expected to return an error code if something goes wrong. Either it can't die or if it does, it is a really unusual situation.
eval {
    do_something_else($param);
};
if ($@) {
    warn "something went wrong\n";
}
Here, the assumption is that the only mechanism by which do_something_else communicates something going wrong is by throwing exceptions.
If do_something_else throws exceptions in truly exceptional situations and returns an error value in some others, you should also check its return value.
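Combining the two checks would look roughly like this. (A sketch: the stub do_something_else is made up so the example runs; it dies for "exceptional" input and returns false for an ordinary failure.)

use strict;
use warnings;

# Made-up stub: dies on exceptional input, returns false on ordinary failure
sub do_something_else {
    my ($param) = @_;
    die "no such resource: $param\n" if $param eq 'missing';
    return $param eq 'good';    # true on success, false on ordinary failure
}

for my $param (qw(good bad missing)) {
    my $result = eval { do_something_else($param) };
    if ($@) {
        warn "exception for '$param': $@";        # it died
    }
    elsif ( !$result ) {
        warn "ordinary failure for '$param'\n";   # it returned an error value
    }
    else {
        print "'$param' succeeded\n";
    }
}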
Using the block form of eval does not cause extra compilation at run time so there are no serious performance drawbacks:
In the second form, the code within the BLOCK is parsed only once--at the same time the code surrounding the eval itself was parsed--and executed within the context of the current Perl program. This form is typically used to trap exceptions more efficiently than the first (see below), while also providing the benefit of checking the code within BLOCK at compile time.
Modules that warn are very annoying. Either succeed or fail. Don't print something to the terminal and then keep running; my program can't take action based on some message you print. If the program can keep running, only print a message if you have been explicitly told that it's ok. If the program can't keep running, die. That's what it's for.
Always throw an exception when something is wrong. If you can fix the problem, fix it. If you can't fix the problem, don't try; just throw the exception and let the caller deal with it. (And if you can't handle an exception from something you call, don't.)
Basically, the reason many programs are buggy is because they try to fix errors that they can't. A program that dies cleanly at the first sign of a problem is easy to debug and fix. A program that keeps running when it's confused just corrupts data and annoys everyone. So don't do that. Die as soon as possible.
Your two examples do entirely different things. The first checks for a false return value, and takes some action in response. The second checks for an actual death of the called code.
You'll have to decide for yourself which action is appropriate in each case. I would suggest simply returning false in most circumstances. You should only be explicitly dieing if you have encountered errors so severe that you cannot continue (or there is no point in continuing, but even then you could still return false).
Wrapping a block in eval {} is not the same thing as wrapping arbitrary code in eval "". In the former case, the code is still parsed at compile-time, and you do not incur any extra overhead. You will simply catch any death of that code (but you won't have any indication as to what went wrong or how far you got in your code, except for the value that is left for you in $@). In the latter case, the code is treated as a simple string by the Perl interpreter until it is actually evaluated, so there is a definite cost here as the interpreter is invoked (and you lose all compile-time checking of your code).
Incidentally, the way you called eval and checked for the value of $@ is not a recommended form; for an extensive discussion of exception gotchas and techniques in Perl, see this discussion.
The first version is very "perlish" and pretty straightforward to understand. The only drawback of this idiom is that it is readable only for short cases. If error handling needs more logic, use the second version.
Nobody's really addressed the "best practice" part of this yet, so I'll jump in.
Yes, you should definitely throw an exception in your code when something goes wrong, and you should do it as early as possible (so you limit the code that needs to be debugged to work out what's causing it).
Code that does stuff like return undef to signify failure isn't particularly reliable, simply because people will tend to use it without checking for the undef return value - meaning that a variable they assume has something meaningful in it actually may not. This leads to complicated, hard-to-debug problems, and even unexpected problems cropping up later in previously-working code.
A more solid approach is to write your code so that it dies if something goes wrong, and then, only if you need to recover from that failure, wrap any calls to it in eval { .. } (or, better, try { .. } catch { .. } from Try::Tiny, as has been mentioned). In most cases, there won't be anything meaningful that the calling code can do to recover, so calling code remains simple in the common case, and you can just assume you'll get a useful value back. If something does go wrong, then you'll get an error message from the actual part of the code that failed, rather than silently getting an undef. If your calling code can do something to recover from failures, then it can arrange to catch exceptions and do whatever it needs to.
Something that's worth reading about is Exception classes, which are a structured way to send extra information to calling code, as well as allow it to pick which exceptions it wants to catch and which it can't handle. You probably won't want to use them everywhere in your code, but they're a useful technique when you have something complicated that can fail in equally complicated ways, and you want to arrange for failures to be recoverable.
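A bare-bones version of the idea looks like this (a sketch; in real code you would more likely build the classes with something from CPAN such as Exception::Class or Throwable):

use strict;
use warnings;

package MyApp::Error;
sub new     { my ($class, %args) = @_; return bless { %args }, $class }
sub message { return $_[0]->{message} }

package MyApp::Error::NotFound;
our @ISA = ('MyApp::Error');

package main;

sub find_user {
    my ($name) = @_;
    die MyApp::Error::NotFound->new( message => "no user named '$name'" );
}

eval { find_user('nobody') };
if ( my $err = $@ ) {
    if ( ref $err && $err->isa('MyApp::Error::NotFound') ) {
        warn "recoverable: " . $err->message . "\n";   # the case we know how to handle
    }
    else {
        die $err;    # anything else is not ours to handle -- re-throw
    }
}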
Question:
What is considered to be "Best practice" - and why - of handling errors in a constructor?.
"Best Practice" can be a quote from Schwartz, or 50% of CPAN modules use it, etc...; but I'm happy with well reasoned opinion from anyone even if it explains why the common best practice is not really the best approach.
As far as my own view of the topic (informed by software development in Perl for many years), I have seen three main approaches to error handling in a perl module (listed from best to worst in my opinion):
Construct an object, set an invalid flag (usually "is_valid" method). Often coupled with setting error message via your class's error handling.
Pros:
Allows for standard (compared to other method calls) error handling as it allows to use $obj->errors() type calls after a bad constructor just like after any other method call.
Allows for additional info to be passed (e.g. >1 error, warnings, etc...)
Allows for lightweight "redo"/"fixme" functionality. In other words, if the object being constructed is very heavy, with many complex attributes that are 100% always OK, and the only reason it is not valid is because someone entered an incorrect date, you can simply do "$obj->setDate()" instead of incurring the overhead of re-executing the entire constructor. This pattern is not always needed, but can be enormously useful in the right design.
Cons: None that I'm aware of.
Return "undef".
Cons: Can not achieve any of the Pros of the first solution (per-object error messages outside of global variables and lightweight "fixme" capability for heavy objects).
Die inside the constructor. Outside of some very narrow edge cases, I personally consider this an awful choice for too many reasons to list on the margins of this question.
UPDATE: Just to be clear, I consider the (otherwise very worthy and a great design) solution of having very simple constructor that can't fail at all and a heavy initializer method where all the error checking occurs to be merely a subset of either case #1 (if initializer sets error flags) or case #3 (if initializer dies) for the purposes of this question. Obviously, choosing such a design, you automatically reject option #2.
It depends on how you want your constructors to behave.
The rest of this response goes into my personal observations, but as with most things Perl, Best Practices really boils down to "Here's one way to do it, which you can take or leave depending on your needs." Your preferences as you described them are totally valid and consistent, and nobody should tell you otherwise.
I actually prefer to die if construction fails, because we set it up so that the only types of errors that can occur during object construction really are big, obvious errors that should halt execution.
On the other hand, if you prefer that doesn't happen, I think I'd prefer 2 over 1, because it's just as easy to check for an undefined object as it is to check for some flag variable. This isn't C, so we don't have a strong typing constraint telling us that our constructor MUST return an object of this type. So returning undef, and checking for that to establish success or failure, is a great choice.
The 'overhead' of construction failure is a consideration in certain edge cases (where you can't quickly fail before incurring overhead), so for those you might prefer method 1. So again, it depends on what semantics you've defined for object construction. For example, I prefer to do heavyweight initialization outside of construction. As to standardization, I think that checking whether a constructor returns a defined object is as good a standard as checking a flag variable.
EDIT: In response to your edit about initializers rejecting case #2, I don't see why an initializer can't simply return a value that indicates success or failure rather than setting a flag variable. Actually, you may want to use both, depending on how much detail you want about the error that occurred. But it would be perfectly valid for an initializer to return true on success and undef on failure.
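As a sketch of that last point (the class and method names are made up):

use strict;
use warnings;

package Widget;

sub new {
    my ($class, %args) = @_;
    return bless { %args }, $class;    # cheap construction that can't fail
}

sub init {
    my ($self) = @_;
    return undef unless defined $self->{date};    # failure: report it to the caller
    # ... heavier setup goes here ...
    return 1;                                     # success
}

package main;

my $w = Widget->new( date => undef );
defined $w->init or warn "init failed; fix the date and retry\n";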
I prefer:
Do as little initialization as possible in the constructor.
croak with an informative message when something goes wrong.
Use appropriate initialization methods to provide per object error messages etc
In addition, returning undef (instead of croaking) is fine in cases where the users of the class may not care why exactly the failure occurred, only whether they got a valid object or not.
I despise easy-to-forget is_valid methods, or adding extra checks to ensure methods are not called when the internal state of the object is not well defined.
I say these from a very subjective perspective without making any statements about best practices.
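Roughly the shape I mean, with made-up names (a sketch, not a definitive implementation):

use strict;
use warnings;

package Account;
use Carp qw(croak);

# Cheap constructor: validate arguments and croak with an informative message
sub new {
    my ($class, %args) = @_;
    croak "Account->new requires an 'owner'" unless defined $args{owner};
    return bless { %args, errors => [] }, $class;
}

# Heavier, fallible setup lives in its own method with per-object errors
sub load_history {
    my ($self) = @_;
    unless ( defined $self->{source} ) {
        push @{ $self->{errors} }, 'no data source configured';
        return;
    }
    # ... fetch and cache the history here ...
    return 1;
}

sub errors { return @{ $_[0]->{errors} } }

package main;

my $acct = Account->new( owner => 'alice' );    # croaks loudly on bad arguments
unless ( $acct->load_history ) {
    warn "load_history: $_\n" for $acct->errors;
}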
I would recommend against #1 simply because it leads to more error handling code which will not be written. For example, if you just return false then this works fine.
my $obj = Class->new or die "Construction failed...";
But if you return an object which is invalid...
my $obj = Class->new;
die "Construction failed #{[ $obj->error_message ]}" if $obj->is_valid;
And as the quantity of error handling code increases, the probability of it being written decreases. And it's not linear. By increasing the complexity of your error handling system you actually decrease the number of errors it will catch in practical use.
You also have to be careful that your invalid object in question dies when any method is called (aside from is_valid and error_message) leading to yet more code and opportunities for mistakes.
But I agree there is value in being able to get information about the failure, which makes returning false (just return, not return undef) inferior. Traditionally this is done by calling a class method or checking a global variable, as in DBI.
my $dbh = DBI->connect($data_source, $username, $password)
    or die $DBI::errstr;
But it suffers from A) you still have to write error handling code and B) it's only valid for the last operation.
The best thing to do, in general, is throw an exception with croak. Now in the normal case the user writes no special code, the error occurs at the point of the problem, and they get a good error message by default.
my $obj = Class->new;
Perl's traditional recommendation against throwing exceptions in library code as being impolite is outdated. Perl programmers are (finally) embracing exceptions. Rather than writing error handling code over and over again, badly, and often forgetting it, exceptions DWIM. If you're not convinced, just start using autodie (watch pjf's video about it) and you'll never go back.
Exceptions align Huffman encoding with actual use. The common case of expecting the constructor to just work and wanting an error if it doesn't is now the least code. The uncommon case of wanting to handle that error requires writing special code. And the special code is pretty small.
my $obj = eval { Class->new } or do { something else };
If you find yourself wrapping every call in an eval you are doing it wrong. Exceptions are called that because they are exceptional. If, as in your comment above, you want graceful error handling for the user's sake, then take advantage of the fact that errors bubble up the stack. For example, if you want to provide a nice user error page and also log the error you can do this:
eval {
    run_the_main_web_code();
} or do {
    log_the_error($@);
    print_the_pretty_error_page();
};
You only need it in one place, at top of your call stack, rather than scattered everywhere. You can take advantage of this at smaller increments, for example...
my $users = eval { Users->search({ name => $name }) } or do {
    ...handle an error while finding a user...
};
There are two things going on. 1) Users->search always returns a true value, in this case an array ref. That makes the simple my $obj = eval { Class->method } or do { ... } pattern work. That's optional. But more importantly, 2) you only need to put special error handling around Users->search. All the methods called inside Users->search, and all the methods they call... they just throw exceptions. And they're all caught at one point and handled the same. Handling the exception at the point which cares about it makes for much neater, compact and flexible error handling code.
You can pack more information into the exception by croaking with a string overloaded object rather than just a string.
my $obj = eval { Class->new }
    or die "Construction failed: $@ and there were @{[ $@->num_frobnitz ]} frobnitzes";
Exceptions:
Do the right thing without any thought by the caller
Require the least code for the most common case
Provide the most flexibility and information about the failure to the caller
Modules such as Try::Tiny fix most of the hanging issues surrounding using eval as an exception handler.
As for your use case where you might have a very expensive object and want to try to continue with it partially built... that smells like YAGNI to me. Do you really need it? Or do you have a bloated object design which is doing too much work too early? If you do need it, you can put the information necessary to continue the construction in the exception object.
First the pompous general observations:
A constructor's job should be: Given valid construction parameters, return a valid object.
A constructor that does not construct a valid object cannot perform its job and is therefore a perfect candidate for exception generation.
Making sure the constructed object is valid is part of the constructor's job. Handing out a known-to-be-bad object and relying on the client to check that the object is valid is a surefire way to wind up with invalid objects that explode in remote places for non-obvious reasons.
Checking that all the correct arguments are in place before the constructor call is the client's job.
Exceptions provide a fine-grained way of propagating the particular error that occurred without needing to have a broken object in hand.
return undef; is always bad[1]
bIlujDI' yIchegh()Qo'; yIHegh()! ("When you fail, do not return(); die()!")
Now to the actual question, which I will construe to mean "what do you, darch, consider the best practice and why". First, I'll note that returning a false value on failure has a long Perl history (most of the core works that way, for example), and a lot of modules follow this convention. However, it turns out this convention produces inferior client code and newer modules are moving away from it.[2]
[The supporting argument and code samples for this turn out to be the more general case for exceptions that prompted the creation of autodie, and so I will resist the temptation to make that case here. Instead:]
Having to check for successful creation is actually more onerous than checking for an exception at an appropriate exception-handling level. The other solutions require the immediate client to do more work than it should have to just to obtain an object, work that is not required when the constructor fails by throwing an exception.[3] An exception is vastly more expressive than undef and equally expressive as passing back a broken object for purposes of documenting errors and annotating them at various levels in the call stack.
You can even get the partially-constructed object if you pass it back in the exception. I think this is a bad practice per my belief about what a constructor's contract with its clients ought to be, but the behavior is supported. Awkwardly.
So: A constructor that cannot create a valid object should throw an exception as early as possible. The exceptions a constructor can throw should be documented parts of its interface. Only the calling levels that can meaningfully act on the exception should even look for it; very often, the behavior of "if this construction fails, don't do anything" is exactly correct.
[1]: By which I mean, I am not aware of any use cases where return; is not strictly superior. If someone calls me on this I might have to actually open a question. So please don't. ;)
[2]: Per my extremely unscientific recollection of the module interfaces I've read in the last two years, subject to both selection and confirmation biases.
[3]: Note that throwing an exception does still require error-handling, as would the other proposed solutions. This does not mean wrapping every instantiation in an eval unless you actually want to do complex error-handling around every construction (and if you think you do, you're probably wrong). It means wrapping the call which is able to meaningfully act on the exception in an eval.