Consider the following lookup function that I wrote. It uses optionals and optional binding, and reports a message if the key is not found in the dictionary:
func lookUp<T: Hashable>(key: T, dictionary: [T: T]) -> T? {
    for i in dictionary.keys {
        if i == key {
            return dictionary[i]
        }
    }
    return nil
}
let dict = ["JO":"Jordan",
"UAE":"United Arab Emirates",
"USA":"United States Of America"
]
if let a = lookUp( "JO",dictionary:dict ) {
print(a) // prints Jordan
} else {
print("cant find value")
}
I have rewritten the code, but this time using error handling and a guard statement, removing -> T?, and adding an enum that conforms to ErrorType:
enum lookUpErrors: ErrorType {
    case noSuchKeyInDictionary
}

func lookUpThrows<T: Hashable>(key: T, dic: [T: T]) throws {
    for i in dic.keys {
        guard i == key else {
            throw lookUpErrors.noSuchKeyInDictionary
        }
        print(dic[i]!)
    }
}
do {
    try lookUpThrows("UAE", dic: dict) // prints united arab emirates
}
catch lookUpErrors.noSuchKeyInDictionary {
    print("cant find value")
}
Both functions work well, but:
which function gives better performance?
which function is "safer"?
which function is recommended (based on pros and cons)?
Performance
The two approaches should have comparable performance. Under the hood they are both doing very similar things: returning a value together with a flag that is checked, and only if the flag shows the result is valid does execution proceed. With optionals, that flag is the enum case (.None vs .Some); with throws, that flag is an implicit one that triggers the jump to the catch block.
It's worth noting your two functions don't do the same thing (one returns nil if no key matches, the other throws if the first key doesn’t match).
If performance is critical, then you can write this to run much faster by eliminating the unnecessary key-subscript lookup like so:
func lookUp<T: Hashable>(key: T, dictionary: [T: T]) -> T? {
    for (k, v) in dictionary where k == key {
        return v
    }
    return nil
}
and
func lookUpThrows<T: Hashable>(key: T, dictionary: [T: T]) throws -> T {
    for (k, v) in dictionary where k == key {
        return v
    }
    throw lookUpErrors.noSuchKeyInDictionary
}
If you benchmark both of these with valid values in a tight loop, they perform identically. If you benchmark them with invalid values, the optional version runs at about twice the speed, so presumably actually throwing carries a small amount of overhead. But it's probably not anything noticeable unless you really are calling this function in a very tight loop and anticipating a lot of failures.
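If you want to check this yourself, here is one minimal sketch of how such a benchmark might look, written in current Swift spelling (Error instead of ErrorType, Hashable keys, argument labels at the call site); the absolute timings and the exact ratio will of course vary with machine and optimization settings:

import Foundation

enum lookUpErrors: Error {
    case noSuchKeyInDictionary
}

func lookUp<T: Hashable>(key: T, dictionary: [T: T]) -> T? {
    for (k, v) in dictionary where k == key { return v }
    return nil
}

func lookUpThrows<T: Hashable>(key: T, dictionary: [T: T]) throws -> T {
    for (k, v) in dictionary where k == key { return v }
    throw lookUpErrors.noSuchKeyInDictionary
}

let dict = ["JO": "Jordan",
            "UAE": "United Arab Emirates",
            "USA": "United States Of America"]
let iterations = 1_000_000

// Time the optional version with a key that is never present.
var start = Date()
var misses = 0
for _ in 0..<iterations where lookUp(key: "XX", dictionary: dict) == nil {
    misses += 1
}
print("optional:", Date().timeIntervalSince(start), misses)

// Time the throwing version with the same missing key.
start = Date()
misses = 0
for _ in 0..<iterations {
    do { _ = try lookUpThrows(key: "XX", dictionary: dict) }
    catch { misses += 1 }
}
print("throws:  ", Date().timeIntervalSince(start), misses)

For a meaningful comparison this should be compiled with optimizations (-O); an unoptimized build likely adds enough overhead of its own to blur the difference.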
Which is safer?
They're both identically safe. In neither case can you call the function and then accidentally use an invalid result. The compiler forces you to either unwrap the optional, or catch the error.
In both cases, you can bypass the safety checks:
// force-unwrap the optional
let name = lookUp( "JO", dictionary: dict)!
// force-ignore the throw
let name = try! lookUpThrows("JO" , dic:dict)
It really comes down to which style of forcing the caller to handle possible failure is preferable.
Which function is recommended?
While this is more subjective, I think the answer’s pretty clear. You should use the optional one and not the throwing one.
For language style guidance, we need only look at the standard library. Dictionary already has a key-based lookup (which this function duplicates), and it returns an optional.
The big reason optional is a better choice is that in this function, there is only one thing that can go wrong. When nil is returned, it is for one reason only and that is that the key is not present in the dictionary. There is no circumstance where the function needs to indicate which reason of several that it threw, and the reason nil is returned should be completely obvious to the caller.
If on the other hand there were multiple reasons, and maybe the function needs to return an explanation (for example, a function that does a network call, that might fail because of network failure, or because of corrupt data), then an error classifying the failure and maybe including some error text would be a better choice.
The other reason optional is better in this case is that failure might even be expected/common. Errors are more for unusual/unexpected failures. The benefit of returning an optional is that it's very easy to use other optional features to handle it - for example optional chaining (lookUp("JO", dictionary: dict)?.uppercaseString) or defaulting using nil-coalescing (lookUp("JO", dictionary: dict) ?? "Team not found"). By contrast, try/catch is a bit of a pain to set up and use, unless the caller really wants "exceptional" error handling, i.e. is going to do a bunch of stuff, some of which can fail, but wants to collect the failure handling down at the bottom.
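To make those conveniences concrete, here is a small sketch in current Swift spelling (uppercased() rather than uppercaseString), using Dictionary's own optional-returning subscript, which this answer notes is what the standalone lookUp duplicates anyway:

let dict = ["JO": "Jordan", "UAE": "United Arab Emirates"]

// Optional chaining: .uppercased() runs only if the key was found.
let shouted = dict["JO"]?.uppercased()      // Optional("JORDAN")

// Nil-coalescing: fall back to a default when the key is missing.
let missing = dict["XYZ"] ?? "Team not found"

// Optional binding: branch on presence or absence of the value.
if let country = dict["UAE"] {
    print(country)
} else {
    print("cant find value")
}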
@AirspeedVelocity already has a great answer, but I think it's worth going a bit further into why to use optionals vs errors.
There are basically four ways for something to go wrong:
Simple error: it fails in only one way, so you don't need to care about why something went wrong. The problem may come either from programmer logic or user data, so you need to be able to handle it at run time and design around it when coding.
This is the case for things like initializing an Int from a String (either the string is parseable as an integer or it's not) or dictionary-style lookups (either there's a value for the key or there's not). Optionals work really well for this in Swift.
Logic error: this is the kind of error that (in theory) comes up only during development, as a result of Doing It Wrong — for example, indexing beyond the bounds of an array.
In ObjC, NSException covers these kinds of cases. In Swift, we have functions like fatalError. I'd assume that part of why NSException isn't surfaced in Swift is that once your program encounters a logic error, it's not really safe to assume anything about its further operation. Logic errors should either be caught during development or cause a (nicely debuggable) crash rather than letting the program continue in an undefined (and thus unsafe) state.
Universal error: there are loads of ways to fail, but they aren't very connected to programmer logic or user action. You might run out of memory to allocate, get a low-level interrupt, or (wait for it...) overflow the stack, but those can happen with almost anything you do and not really because of any specific thing you do.
You see universal errors getting surfaced as exceptions in some other languages, but that means that you have to code around the possibility of any and every call you make being able to fail. And at that point you're writing more error handling than you are actual code.
Recoverable error: This is for when there are lots of ways to go wrong, but not in ways that preclude further operation, and what a program does upon encountering an error might change depending on what kind of error it is. Filesystems and networking are the common examples here: if you can't load a file, it might be because the user got the name wrong (so you should tell the user that) or because the wifi momentarily dropped and will be back shortly (so you might forego the alert and just try again).
In Cocoa, historically, this is what NSError parameters are for. Swift's error handling makes this pattern part of the language.
So, when you're writing new API (for yourself or someone else to call) in Swift, or using new ObjC annotations to make an existing API easier to use from Swift, think about what kind of errors you're dealing with.
Is there only one clear way to fail that isn't a result of API misuse? Use an Optional return type.
Can something fail only if a client doesn't follow your API contract — say, if you're writing a container class that has a subscript and a count, or requiring that some specific sequence of calls be made? Don't burden every bit of code that uses your API with error handling or optional unwrapping — just fatalError or assert (or throw NSException if your Swift API is a front to ObjC code) and document what the right way is for people to use your API.
Okay, so your ObjC init method returns nil iff [super init] returns nil. So should you mark your initializer as failable for Swift or add an error out-parameter? Think about when that really happens — if -[NSObject init] is returning nil, it's because you chained it off of an alloc call that returned nil. If alloc fails, it's already The End Times for your process, so it's not worth handling that case.
Do you have multiple failure cases, some or all of which might be worth reporting to a user? Or that a client calling your API might want to ignore some but not all of? Write a Swift function that throws and a corresponding set of ErrorType values, or an ObjC method that returns an NSError out-parameter.
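To make that guidance concrete, here is a rough sketch in current Swift spelling (Error rather than ErrorType); all of the types and functions here are hypothetical examples, not anything from the answer above:

import Foundation

// One clear failure mode: the key either exists or it doesn't -> Optional.
func nickname(for user: String, in table: [String: String]) -> String? {
    return table[user]
}

// API-contract violation: out-of-range access is a programmer mistake -> trap.
struct Roster {
    private let members: [String]
    init(members: [String]) { self.members = members }
    var count: Int { return members.count }
    subscript(index: Int) -> String {
        precondition(index >= 0 && index < members.count, "index out of range")
        return members[index]
    }
}

// Several reportable failure modes -> throw a descriptive error.
enum ProfileError: Error {
    case notFound(user: String)
    case corruptData
}

func loadProfile(for user: String, from store: [String: Data]) throws -> String {
    guard let data = store[user] else { throw ProfileError.notFound(user: user) }
    guard let text = String(data: data, encoding: .utf8) else { throw ProfileError.corruptData }
    return text
}

Callers then get the cheapest appropriate handling at each level: ?? or if let for the optional, nothing at all for the precondition, and do/catch only where there is something useful to do with the error.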
If you are using Swift 2.0 you could use either version. The second version uses try/catch (introduced with 2.0) and is thus not backwards compatible, which might be a disadvantage to consider.
If there is any performance difference then it will be negligible.
My personal favorite is the first one, as it is straightforward and well readable. If I had to maintain the second version I would ask myself why the author chose a try/catch approach for such a simple case. So I would rather be confused...
If you had many complex conditions with many exit points (throws), then I would go for the second one. But as I said, this is not the case here.
Related
Googling has led me to general Swift error handling links, but I have a more specific question. I think the answer here is "no, you're out of luck" but I want to double check to see if I'm missing something. This question from a few years ago seems similar and has some answers with gross looking workarounds... I'm looking to see if the latest version of Swift makes something more elegant possible.
Situation: I have a function which is NOT marked with throws, and uses try!.
Goal: I want to create a unit test which verifies that, yep, giving this function the wrong thing will in fact fail and throw a (runtime/fatal) error.
Problems:
When I wrap this function in a do-catch, the compiler warns me that the catch block is unreachable.
When I run the test and pass in the bad arguments, the do-catch does NOT catch the error.
XCTAssertThrows also does not catch the error.
This function is built to have a signature identical to another, silently failing function, which I swap out for this one on simulators so that I can fail loudly during testing (either automated or manual). So I can't just change this to a throwing function, because then the other function would also have to be marked as throwing, and I want that one to fail silently.
So, is there a way to throw an unhandled error that I can catch in a unit test?
Alternatively, can I make this function blow up in a testable way without changing the signature?
There is no way to catch non-throwing (runtime) errors in Swift, and that is exactly what you opt into by putting ! after try.
But you can refactor your code so that you have more control from outside the function, like this:
Factor out the throwing function, so you can test it in the right way:
import Foundation

func throwingFunc() throws {
    let json = "catch me if you can".data(using: .utf8)!
    _ = try JSONDecoder().decode(Int.self, from: json)
}
Write a non-throwing wrapper with a custom error handler:
func nonThrowingFunc(catchHandler: ((Error) -> Void)? = nil) {
    // No handler supplied: crash on error, exactly like a bare try! call.
    guard let handler = catchHandler else { return try! throwingFunc() }
    do {
        try throwingFunc()
    } catch {
        handler(error)
    }
}
So the handler will be called only if you provide one:
// Test the function and fail the test if needed
nonThrowingFunc { error in
    XCTFail(error.localizedDescription)
}
And you have the crashing one:
// Crash the program
nonThrowingFunc()
Note
! (as in force) is designed for situations where you are quite sure about the result. Good examples:
Decoding hardcoded or static JSON
Force unwrapping hardcoded values
force-trying an interface when you know what its implementation is at that point
etc.
If your function is not pure and may fail for some arguments, you should consider NOT forcing it, and refactor your code into a safer version.
Swift (up to and including the current 5.5) explicitly postulates that all errors inside a non-throwing function MUST be handled inside it. So Swift, by design, has no public mechanism to intervene in this process, and it generates a runtime error instead.
You might not even like my answer. But here it goes:
Though I agree that there are plenty of cases where try! is more useful (or even better) than try, I would also argue that they are not meant to be tested, because they signify programmer mistakes. A programmer mistake is unplanned. You cannot test something unplanned. If you expect (or suspect) a mistake to happen in production, then you should not be using try! in the first place. To me this is a violation of what I think are standards for programming.
Throwing functions were added to handle expected mistakes, and using try! tells the compiler that you expect a mistake will NEVER happen. For example, when you are parsing a hard-coded value that you know will never fail. So why would you ever need to test a mistake that will never happen?
You can also use an assertionFailure if you want to be rigorous in debug but safe in release.
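For example, a small sketch of that trade-off (the configuration-loading function is hypothetical): assertionFailure stops a debug build at the bad spot, but is compiled out of release (-O) builds, where the code then falls through to a default, whereas fatalError would stop the process in every build configuration:

// Hypothetical example: a value we expect to be present in debug and release.
func port(from config: [String: String]) -> Int {
    guard let raw = config["port"], let port = Int(raw) else {
        // Traps in debug builds so the mistake is caught early; in a release
        // build this call has no effect and the default below is returned.
        assertionFailure("config is missing a numeric 'port' entry")
        return 8080
    }
    return port
}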
Runtime errors, like
array index out of bounds
forcibly unwrapping nil values
division by zero
are considered programming errors, and call for precondition checks (sketched below) to reduce the chances they happen in production. Precondition failures are usually caught in the testing phase, as QA people exercise the application from every angle, trying to find implementation flaws.
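As an illustration of such a precondition check, here is a minimal sketch (the function and its contract are hypothetical):

// Hypothetical bounded read: the index is part of the API contract,
// so violating it is a programming error, not a recoverable condition.
func element(at index: Int, in storage: [Int]) -> Int {
    precondition(index >= 0 && index < storage.count,
                 "index \(index) is out of bounds 0..<\(storage.count)")
    return storage[index]
}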
As they are programming errors, you should not need to test them; if the execution of the code did reach a point where the assertion fails, it means the other code failed to provide valid data. And that's what you should unit test: the other code.
It's best if you avoid needing the assertions at all. !, try!, and implicitly unwrapped optionals all trap on nil values, so it's better to avoid these constructs if you're not 100% sure that you'll receive only non-nil values.
A programming error most of the time means that your application has reached an unrecoverable state, and there's little that can be done afterwards, so it's better to let it crash than to continue in an invalid state.
Thus, Swift doesn't need to expose an API to handle this kind of scenario. The existing workarounds are complicated, fragile, and not worth using in real-life applications.
To conclude:
replace the forced unwraps (try! included) with code that can handle nils, and unit test that, or,
unit test the caller code, since that's the actual problematic code.
The latter case assumes that the forced unwrap usage is legitimate, as the code expects the value it receives to be non-nil, so the burden moves to the other piece of code: the provider of the value that's being forcefully unwrapped.
Use the generic function XCTAssertThrowsError in Swift for unit testing. It asserts that an expression throws an error:
func XCTAssertThrowsError<T>(_ expression: #autoclosure () throws -> T, _ message: #autoclosure () -> String = "", file: StaticString = #filePath, line: UInt = #line, _ errorHandler: (_ error: Error) -> Void = { _ in })
https://developer.apple.com/documentation/xctest/1500795-xctassertthrowserror
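As a usage sketch, applied to the throwing lookup from the first example (the function and error names are assumed from that earlier code, in current Swift spelling; note this only helps for expressions that are actually marked throws, not for the try! case above):

import XCTest

final class LookupTests: XCTestCase {
    func testMissingKeyThrows() {
        let dict = ["JO": "Jordan"]

        // Passes if the expression throws; the trailing closure can inspect the error.
        XCTAssertThrowsError(try lookUpThrows(key: "XYZ", dictionary: dict)) { error in
            XCTAssertTrue(error is lookUpErrors)
        }
    }
}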
I've read more than a few answers to similar questions as well as a few tutorials, but none address my main confusion. I'm a native Java coder, but I've programmed in Swift as well.
Why would I ever want to use optionals instead of nulls?
I've read that it's so there are fewer null checks and errors, but these are necessary or easily avoided with clean programming.
I've also read it's so all references succeed (https://softwareengineering.stackexchange.com/a/309137/227611 and val length = text?.length). But I'd argue this is a bad thing or a misnomer. If I call the length function, I expect it to contain a length. If it doesn't, the code should deal with it right there, not continue on.
What am I missing?
Optionals provide clarity of type. An Int stores an actual value - always, whereas an Optional Int (i.e. Int?) stores either the value of an Int or a nil. This explicit "dual" type, so to speak, allows you to craft a simple function that can clearly declare what it will accept and return. If your function is to simply accept an actual Int and return an actual Int, then great.
func foo(x: Int) -> Int
But if your function wants to allow the return value to be nil, and the parameter to be nil, it must do so by explicitly making them optional:
func foo(x: Int?) -> Int?
In other languages such as Objective-C, objects can always be nil instead. Pointers in C++ can be nil, too. And so any object you receive in Obj-C or any pointer you receive in C++ ought to be checked for nil, just in case it's not what your code was expecting (a real object or pointer).
In Swift, the point is that you can declare object types that are non-optional, and thus whatever code you hand those objects to doesn't need to do any checks. It can just safely use those objects and know they are non-nil. That's part of the power of Swift optionals. And if you receive an optional, you must explicitly unwrap it when you need to access its value. Those who code in Swift try to make their functions and properties non-optional whenever they can, unless they truly have a reason for making them optional.
The other beautiful thing about Swift optionals is all the built-in language constructs for dealing with them, which make the code faster to write, cleaner to read, and more compact, taking a lot of the hassle out of checking and unpacking an optional compared with the equivalent you'd have to write in other languages.
The nil-coalescing operator (??) is a great example, as are if-let and guard and many others.
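For instance, a brief sketch of those constructs side by side (the dictionary and names are purely illustrative):

let ages: [String: Int] = ["alice": 30]

// if-let: run the branch only when the optional has a value.
if let age = ages["alice"] {
    print("alice is \(age)")
}

// guard: unwrap once and use the value for the rest of the scope.
func describe(_ name: String) -> String {
    guard let age = ages[name] else { return "\(name): age unknown" }
    return "\(name): \(age)"
}

// Nil-coalescing: collapse the optional to a default in one expression.
let bobsAge = ages["bob"] ?? 0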
In summary, optionals encourage and enforce more explicit type-checking in your code - type-checking that's done by the compiler rather than at runtime. Sure, you can write "clean" code in any language, but it's just a lot simpler and more automatic to do so in Swift, thanks in big part to its optionals (and its non-optionals too!).
Optionals help avoid errors at compile time, so that you don't unintentionally pass nils.
In Java, any variable can be null, so it becomes a ritual to check for null before using it. In Swift, only an optional can be nil, so you only have to check optionals for a possible nil value.
You don't always have to check an optional. You can work equally well on optionals without unwrapping them: calling a method on a nil optional via optional chaining does not break the code.
There can be more, but those are the ones that help a lot.
TL/DR: The null checks that you say can be avoided with clean programming can also be avoided in a much more rigorous way by the compiler. And the null checks that you say are necessary can be enforced in a much more rigorous way by the compiler. Optionals are the type construct that make that possible.
var length = text?.length
This is actually a good example of one way that optionals are useful. If text doesn't have a value, then it can't have a length either. In Objective-C, if text is nil, then any message you send it does nothing and returns 0. That fact was sometimes useful and it made it possible to skip a lot of nil checking, but it could also lead to subtle errors.
On the other hand, many other languages point out the fact that you've sent a message to a nil pointer by helpfully crashing immediately when that code executes. That makes it a little easier to pinpoint the problem during development, but run time errors aren't so great when they happen to users.
Swift takes a different approach: if text doesn't point to something that has a length, then there is no length. An optional isn't a pointer, it's a kind of type that either has a value or doesn't have a value. You might assume that the variable length is an Int, but it's actually an Int?, which is a completely different type.
If I call the length function, I expect it to contain a length. If it doesn't, the code should deal with it right there, not continue on.
If text is nil then there is no object to send the length message to, so length never even gets called and the result is nil. Sometimes that's fine — it makes sense that if there's no text, there can't be a length either. You may not care about that — if you were preparing to draw the characters in text, then the fact that there's no length won't bother you because there's nothing to draw anyway. The optional status of both text and length forces you to deal with the fact that those variables don't have values at the point where you need the values.
Let's look at a slightly more concrete version:
var text : String? = "foo"
var length : Int? = text?.count
Here, text has a value, so length also gets a value, but length is still an optional, so at some point in the future you'll have to check that a value exists before you use it.
var text : String? = nil
var length : Int? = text?.count
In the example above, text is nil, so length also gets nil. Again, you have to deal with the fact that both text and length might not have values before you try to use those values.
var text : String? = "foo"
var length : Int = text.count
Guess what happens here? The compiler says Oh no you don't! because text is an optional, which means that any value you get from it must also be optional. But the code specifies length as a non-optional Int. Having the compiler point out this mistake at compile time is so much nicer than having a user point it out much later.
var text : String? = "foo"
var length : Int = text!.count
Here, the ! tells the compiler that you think you know what you're doing. After all, you just assigned an actual value to text, so it's pretty safe to assume that text is not nil. You might write code like this because you want to allow for the fact that text might later become nil. Don't force-unwrap optionals if you don't know for certain, because...
var text : String? = nil
var length : Int = text!.count
...if text is nil, then you've betrayed the compiler's trust, and you deserve the run time error that you (and your users) get:
error: Execution was interrupted, reason: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
Now, if text is not optional, then life is pretty simple:
var text : String = "foo"
var length : Int = text.count
In this case, you know that text and length are both safe to use without any checking because they cannot possibly be nil. You don't have to be careful to be "clean" -- you literally can't assign anything that's not a valid String to text, and every String has a count, so length will get a value.
Why would I ever want to use optionals instead of nulls?
Back in the old days of Objective-C, we used to manage memory manually. There was a small number of simple rules, and if you followed the rules rigorously, then Objective-C's retain counting system worked very well. But even the best of us would occasionally slip up, and sometimes complex situations arose in which it was hard to know exactly what to do. A huge portion of Objective-C questions on StackOverflow and other forums related to the memory management rules. Then Apple introduced ARC (automatic retain counting), in which the compiler took over responsibility for retaining and releasing objects, and memory management became much simpler. I'll bet fewer than 1% of Objective-C and Swift questions here on SO relate to memory management now.
Optionals are like that: they shift responsibility for keeping track of whether a variable has, doesn't have, or can't possibly not have a value from the programmer to the compiler.
While learning Swift, I have seen many cases of safe code vs. unsafe code. A perfect example of this is
as? and as!
as?, from my understanding tries to type cast, but if it doesn't cast, returns nil.
as!, from my understanding, tries to type cast, and crashes if it fails.
My question is: why does as! (and other unsafe syntax with safe counterparts, for that matter) exist? Obviously as? is the better choice. The only answer I could come up with is that maybe using as! is faster for the compiler, and is therefore mainly an optimization piece of syntax.
I'm just curious to know whether I'm right about safe syntax being a bit slower, since performance matters to me a great deal.
Thank you for your time.
This question is based upon a false premise: The primary choice of as? vs as! is not generally one of performance. Sure, as! potentially might be a bit more efficient, but in an optimized build, the difference is generally negligible.
One chooses as! vs as? based upon the context:
If the cast can possibly fail at runtime, then you'd use as?. Of course you add code to check to see whether the cast succeeded or not.
A common example of a cast that could fail would be whenever the source of the data is some external source (e.g. user input or a network request). A user could paste text values into a field in which you were expecting only numeric values. An external web server could be unavailable or it could even return data in an unexpected format. You'd want to use as? to detect these situations and handle it gracefully.
But if the cast can not possibly fail at runtime, you use as!. But you do so less for performance reasons than for reasons of clarity and writing concise code.
A common example of a cast that cannot fail would be when retrieving a cell prototype from a storyboard, where you know the base type was set in Interface Builder, though the Cocoa API cannot know this. It would be inefficient to add a lot of code checking whether the base type was set correctly (especially since this would be a fatal programming error anyway), and it leads to code with a lot of syntactic noise and no benefit. And if you don't do it correctly, you could easily suppress an error that you really would want to flush out during the testing process, making it harder to find the problem. This is a good use case for as! syntax.
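A sketch of that storyboard case (the cell class and reuse identifier here are hypothetical): the cast can only fail if the storyboard is misconfigured, which is a programmer error you would want to surface immediately rather than handle:

import UIKit

final class TeamCell: UITableViewCell {
    @IBOutlet var nameLabel: UILabel!
}

class TeamListController: UITableViewController {
    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return 1
    }

    override func tableView(_ tableView: UITableView,
                            cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        // The prototype's class is set in Interface Builder, so a failed cast
        // here is a misconfiguration, not a runtime condition to recover from.
        let cell = tableView.dequeueReusableCell(withIdentifier: "TeamCell",
                                                 for: indexPath) as! TeamCell
        cell.nameLabel.text = "Jordan"
        return cell
    }
}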
as! is not unsafe unless your code before it has failed to ensure the cast can't fail. Imagine a situation where your code has already verified the object can indeed be cast to a particular protocol. If for whatever reason you find it convenient to not create a new local variable of that type and cast the first variable to it, then using as! can be appropriate.
The same logic goes for implicitly unwrapped optionals. Since an IBOutlet property can't be assigned yet when your view controller is instantiated (the storyboard hasn't loaded yet), the property has to be optional. But it's irksome to have to treat the property as an optional throughout the class when you know the storyboard has loaded. In fact, if there is a problem with locating the storyboard element and assigning it, a 100% guaranteed crash is desired so that the developer can always spot and fix the problem.
So returning to as!, this raises a second case of when it's appropriate to use: whenever a crash is the most desirable outcome of the failed cast.
So it's not about compiler speed. It's about code logic convenience. There are appropriate times to use it, and for those times, it's a more compact and easier to read way of representing your logic goals.
As you say, as! should hardly ever be used. There are a few cases, though, where due to legacy issues, the compiler doesn't know what type something should be, even though the developer does. One example of this is the NSCopying protocol. Because Objective-C was not as strict about type safety as Swift, copy() returns an id, which is bridged to Any in Swift. However, when you use copy(), mutableCopy(), and friends, you usually know exactly what type you will get out of it, so as! is typically used to force a cast to the correct type.
let aString: NSAttributedString = ...
let mutableAString = aString.mutableCopy() as! NSMutableAttributedString
You can investigate this question for yourself by compiling something like
func f(a: Any) -> Int {
    return (a as! Int) + 5
}
vs.
func f(a: Any) -> Int {
    return ((a as? Int) ?? 3) + 5
}
With the Swift 4 compiler, running swiftc -O -emit-assembly on this Swift code produces extremely similar output, which should square with your understanding of how as! works and how we would implement it if it wasn't part of the language:
func forceCast<T>(_ subject: Any) -> T {
    guard let result = subject as? T else {
        fatalError("Could not cast \(subject) to \(T.self)")
    }
    return result
}
It's worth noting, as SmartCat already alluded to, that as! isn't actually "unsafe" in the sense that the Swift compiler authors define that word, because it, like as?, actually checks the type of its subject. There is an unsafe-in-the-Swift-sense force cast in Swift; it's called, unsurprisingly, unsafeBitCast, and it is equivalent to a C/Objective-C cast: it doesn't check types and just assumes you knew what you were doing. Will it be faster than the checked casts? Without a doubt.
So, to answer your title's question: yes, safety has a cost in Swift. The flip side of the coin is that if you write code that avoids casts and optionality, Swift can (in theory) optimize your code far more aggressively than languages that make less strong safety guarantees.
To answer your actual question: there's no difference between as? and as! in terms of performance.
Others have already covered why as! exists; it's so you can tell the compiler "I can't explain why to you, but this will always work; trust me".
I am curious as to why Swift language engineers have decided to go with this syntax:
do {
let x = try statement that throws
try a void statement that throws
} catch {
}
vs the more traditional try-catch syntax that seems to be doing exactly the same. Except that in the case of Swift, one needs to type a try for each line that throws an exception.
They want to use try to call out each specific expression that can throw. I imagine the reason for this is that a common complaint about exceptions is that they're 'invisible gotos', where users can't tell what's going to throw without going and looking at the definition of every function they're using. Requiring try on each throwing function call eliminates this problem.
Personally I don't think 'invisible gotos' is a valid complaint. Far from being unstructured (like goto), exceptions make error handling highly structured. Furthermore, if you're using exceptions correctly, you almost never need to be able to tell at a glance what functions throw. For more info, see http://exceptionsafecode.com, which discusses correct use of exceptions at length.
Given that they want try to be an explicit call out for throwing function calls, they probably didn't want to reuse it for traditional try-block syntax.
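For instance, a sketch (loadData and parse are hypothetical throwing functions) of how each potential throw point stays visible at the call site:

import Foundation

struct Model {}

func loadData() throws -> Data { /* ... */ return Data() }
func parse(_ data: Data) throws -> Model { /* ... */ return Model() }
func render(_ model: Model) { /* cannot throw */ }

func refresh() {
    do {
        let data = try loadData()    // marked: this call can throw
        let model = try parse(data)  // marked: so can this one
        render(model)                // unmarked: guaranteed not to throw
    } catch {
        print("refresh failed: \(error)")
    }
}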
They could also have just not used any keyword:
{
let x = try foo()
} catch {
// ...
}
They're also using do to introduce arbitrary nested scopes:
do {
let x = foo()
}
Other languages already use braces without any keyword for this. Presumably they feel having a keyword makes the syntax easier to read or parse or something.
I am writing a web app where exceptions are used to handle error cases. Often, I find myself writing helpers like this:
def someHelper(...): Boolean = {...}
and then using it like this:
if (!someHelper(...)){
throw new SomeException()
}
These exceptions represent things like invalid parameters, and when handled they send a useful error message to the user, eg
try {
...
} catch {
case e: SomeException => "Bad user!"
}
Is this a reasonable approach? And how could I pass the exception into the helper function and have it thrown there? I have had trouble constructing a type for such a function.
I use Either most of the time, not exceptions. I generally use exceptions, as you have done or in some similar way, when the control flow has to go way, way back to some distant point, and otherwise there's nothing sensible to do. However, when the exceptions can be handled fairly locally, I will instead
def myMethod(...): Either[String, ValidatedInputForm] = {
  ...
  if (!someHelper(...)) Left("Agree button not checked")
  else Right(whateverForm)
}
and then when I call this method, I can
myMethod(blah).fold({ err =>
  doSomething(err)
  saneReturnValue
}, { form =>
  foo(form)
  form.usefulField
})
or match on Left(err) vs Right(form), or various other things.
If I don't want to handle the error right there, but instead want to process the return value, I
myMethod(blah).right.map{ form =>
  foo(form)
  bar(form)
}
and I'll get an Either with the error message unchanged as a Left, if it was an error message, or with the result of { foo(form); bar(form) } as a Right if it was okay. You can also chain your error processing using flatMap, e.g. if you wanted to perform an additional check on so-far-correct values and reject some of them, you could
myMethod(blah).right.flatMap{ form =>
  if (!checkSomething(form)) Left("Something didn't check out.")
  else Right(form)
}
It's this sort of processing that makes using Either more convenient (and usually better-performing, if exceptions are common) than exceptions, which is why I use them.
(In fact, in very many cases I don't care why something went wrong, only that it went wrong, in which case I just use an Option.)
There's nothing special about passing an exception instance to some method:
def someMethod(e: SomeException) {
  throw e
}

someMethod(new SomeException)
But I have to say that I get a very distinct feeling that your whole idea just smells. If you want to validate user input, just write validators, e.g. a UserValidator with some method like isValid that tests the user input and returns a boolean; you can implement some messaging there too. Exceptions are really intended for different purposes.
The two most common ways to approach what you're trying to do is to either just have the helper create and throw an exception itself, or exactly what you're doing: have the calling code check the results, and throw a meaningful exception, if needed.
I've never seen a library where you pass in the exception you expect the helper to throw. As I said on another answer, there's a surprisingly substantial cost to simply instantiating an exception, and if you followed this pattern throughout your code you could see an overall performance problem. This could be mitigated through the use of by-name parameters, but if you just forget to put => in a few key functions, you've got a performance problem that's difficult to track down.
At the end of the day, if you want the helper to throw an exception, it makes sense that the helper itself already knows what sort of exception it wants to throw. If I had to choose between A and B:
def helperA(...) { if (stuff) throw new InvalidStuff() }
def helperB(..., onError: => Exception) { if (stuff) throw onError }
I would choose A every time.
Now, if I had to choose between A and what you have now, that's a toss up. It really depends on context, what you're trying to accomplish with the helpers, how else they may be used, etc.
On a final note, naming is very important in these sorts of situations. If you go the return-code-helper route, your helpers should have question names, such as isValid. If you have exception-throwing helpers, they should have action names, such as validate. Maybe even give it emphasis, like validate_!.
For an alternative approach you could check out scalaz Validators, which give a lot of flexibility for this kind of case (e.g. should I crash on error, accumulate the errors and report at the end, or ignore them completely?). A few examples might help you decide if this is the right approach for you. If you find it hard to find a way into the library, this answer gives some pointers to some introductory material.