I have been struggling to properly implement the ForwardIndexType protocol for an enum, in particular the handling of the end case (i.e. for the last item, which has no successor). This protocol is not really covered in the Swift language book.
Here is a simple example:
enum ThreeWords: Int, ForwardIndexType {
    case one = 1, two, three

    func successor() -> ThreeWords {
        return ThreeWords(rawValue: self.rawValue + 1)!
    }
}
The successor() function will return the next enumerator value, except for the last element, where it will fail with a runtime error, because there is no value after .three
The ForwardIndexType protocol does not allow successor() to return an optional, so there seems to be no way of signalling that there is no successor.
Now using this in a for loop to iterate over the closed range of all the possible values of an enum, one runs into a problem for the end case:
for word in ThreeWords.one...ThreeWords.three {
    print(" \(word.rawValue)")
}
println()

// Crashes with the error:
fatal error: unexpectedly found nil while unwrapping an Optional value
Swift inexplicably calls the successor() function of the end value of the range before executing the statements in the for loop. If the range is left half-open, ThreeWords.one..<ThreeWords.three, then the code executes correctly, printing 1 2
If I modify the successor function so that it does not try to create a value larger than .three, like this:
func successor() -> ThreeWords {
    if self == .three {
        return .three
    } else {
        return ThreeWords(rawValue: self.rawValue + 1)!
    }
}
Then the for loop does not crash, but it also misses the last iteration, printing the same as if the range were half open: 1 2
My conclusion is that there is a bug in Swift's for loop iteration; it should not call successor() on the end value of a closed range. Secondly, ForwardIndexType ought to allow successor() to return an optional, to be able to signal that a value has no successor.
Has anyone had more success with this protocol?
Indeed, it seems that successor will be called on the last value.
You may wish to file a bug, but to work around this you could simply add a sentinel value to act as a successor.
It seems the ... operator
func ...<Pos : ForwardIndexType>(minimum: Pos, maximum: Pos) -> Range<Pos>
calls maximum.successor(). It constructs Range<T> like
Range(start: minimum, end: maximum.successor())
So, if you want to use an enum as Range.Index, you have to define a successor for the last value.
enum ThreeWords: Int, ForwardIndexType {
    case one = 1, two, three
    case EXHAUST

    func successor() -> ThreeWords {
        return ThreeWords(rawValue: self.rawValue + 1) ?? ThreeWords.EXHAUST
    }
}
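With the sentinel in place, successor() can be called on .three without crashing, and the closed range from the question iterates as expected:

for word in ThreeWords.one...ThreeWords.three {
    print(" \(word.rawValue)")
}
// prints: 1 2 3

The loop stops once the internal index reaches .EXHAUST, which is never handed to the loop body.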
This is an old question, but I would like to sum up some things and post another possible solution.
As @jtbandes and @rintaro already stated, a closed range created with the start...end operator is internally constructed as start..<end.successor()
AFAIK this is intentional behavior in Swift.
In many cases you can also use an Interval where you thought about using a Range or where Swift declared a Range by default. The point here is that intervals aren't collections.
So this is not possible with Intervals
for word in ThreeWords.one...ThreeWords.three {...}
For the following I assume the snippet above was just a debug case to cross-check the values.
To declare an Interval you need to explicitly specify the type. Either a HalfOpenInterval (..<) or a ClosedInterval (...)
var interval: ClosedInterval = ThreeWords.one...ThreeWords.four
This requires you to make your enumeration Comparable. Although Int is already Comparable, you still need to add it to the inheritance list:
enum ThreeWords: Int, ForwardIndexType, Comparable {
    case one = 1, two, three, four

    func successor() -> ThreeWords {
        return ThreeWords(rawValue: self.rawValue + 1)!
    }
}
And finally the enumeration needs to conform to Comparable. This is a generic approach, since your enumeration also conforms to the protocol RawRepresentable:
func < <T: RawRepresentable where T.RawValue: Comparable>(lhs: T, rhs: T) -> Bool {
    return lhs.rawValue < rhs.rawValue
}
Like I wrote, you can't iterate over it in a loop anymore, but you can do a quick cross-check using a switch:
var interval: ClosedInterval = ThreeWords.one...ThreeWords.four

switch ThreeWords.four {
case ThreeWords.one...ThreeWords.two:
    print("contains one or two")
case let word where interval ~= word:
    print("contains: \(word) with raw value: \(word.rawValue)")
default:
    print("no case")
}
prints "contains: four with raw value: 4"
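Outside of a switch you can also query the interval directly; a small sketch using the same interval as above:

interval.contains(ThreeWords.three) // true
interval ~= ThreeWords.one          // true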
I was thinking about how Swift ensures uniqueness for Set, because I had turned one of my objects from Equatable to Hashable for free, and so I came up with this simple playground:
struct SimpleStruct: Hashable {
    let string: String
    let number: Int

    static func == (lhs: SimpleStruct, rhs: SimpleStruct) -> Bool {
        let areEqual = lhs.string == rhs.string
        print(lhs, rhs, areEqual)
        return areEqual
    }
}
var set = Set<SimpleStruct>()
let first = SimpleStruct(string: "a", number: 2)
set.insert(first)
So my first question was:
Will the static func == method be called every time I insert a new object into the set?
My question comes from this thought:
For an Equatable object, the only way to decide whether two objects are equal is to ask for the result of static func ==.
For a Hashable object, a faster way is to compare hash values... but, as in my case, the default implementation uses both string and number, in contrast with the == logic.
So, in order to test how Set behaves, I have just added a print statement.
I found that sometimes I got the print statement and sometimes I didn't, as if the hashValue alone were sometimes enough to make the decision... So the method isn't called every time.
Weird...
So I tried adding two objects that are equal and wondered what the result of set.contains would be:
let second = SimpleStruct(string: "a", number: 3)
print(first == second) // returns true
set.contains(second)
And wonder of wonders, launching the playground a couple of times, I got different results, which might cause unpredictable behavior...
Adding
var hashValue: Int {
    return string.hashValue
}
it gets rid of any unexpected results, but my doubts are:
Why, without the custom hashValue implementation, does == sometimes get called and sometimes not?
Should Apple prevent this kind of unexpected behaviour?
The synthesized implementation of the Hashable requirement uses all stored
properties of a struct, in your case string and number. Your implementation
of == is only based on the string:
let first = SimpleStruct(string: "a", number: 2)
let second = SimpleStruct(string: "a", number: 3)
print(first == second) // true
print(first.hashValue == second.hashValue) // false
This is a violation of a requirement of the Hashable protocol:
Two instances that are equal must feed the same values to Hasher in hash(into:), in the same order.
and causes the undefined behavior. (And since hash values are randomized
since Swift 4.2, the behavior can be different in each program run.)
What probably happens in your test is that the hash value of second is used to determine the “bucket” of the set in which the value would be stored. That may or may not be the same bucket in which first is stored. But that is an implementation detail: undefined behavior is undefined behavior, and it can cause unexpected results or even runtime errors.
Implementing
var hashValue: Int {
    return string.hashValue
}
or alternatively (starting with Swift 4.2)
func hash(into hasher: inout Hasher) {
    hasher.combine(string)
}
fixes the rule violation, and therefore makes your code behave as expected.
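Putting it together, a minimal sketch of the consistent type: equality and hashing are now both based on string alone, so Set behaves deterministically across runs:

struct SimpleStruct: Hashable {
    let string: String
    let number: Int

    static func == (lhs: SimpleStruct, rhs: SimpleStruct) -> Bool {
        return lhs.string == rhs.string
    }

    func hash(into hasher: inout Hasher) {
        hasher.combine(string)
    }
}

var set = Set<SimpleStruct>()
set.insert(SimpleStruct(string: "a", number: 2))
print(set.contains(SimpleStruct(string: "a", number: 3))) // true, on every run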
Swift 3.
Ultimately my functions need to receive UInt8 data types, but I'm never sure if the arguments I will receive from callers will be Int, Int64, UInt, Float, etc. I know they will be numeric types, I just don't know which flavor.
I could do:
func foo(value: Int) { }
func foo(value: Float) {}
func foo(value: UInt) {}
But that's crazy. So I thought I could do something like create a protocol
protocol ValidEncodable {
}
And then pass in types that conform:
func foo(value: ValidEncodable) { }
And then in that function I could get my values into the correct format
func foo(value: ValidEncodable) -> UInt8 {
    let correctedValue = min(max(floor(value), 0), 100)
    return UInt8(correctedValue)
}
I'm really struggling to figure out:
1) How to create this ValidEncodable protocol that contains all the numeric types
2) How to do things like floor(value) when the value I get is an Int, without handling every possible numeric type separately (floor(x) is only available on floating-point types)
Ultimately I need the values to be UInt8 in the range of 0-100. The whole reason for this madness is that I'm parsing XML files to my own internal data structures and I want to bake in some validation to my values.
This can be done without a protocol, and by making use of compiler checks, which greatly reduces the chances of bugs.
My recommendation is to use a partial function - i.e. a function that, instead of taking any int, takes an already-validated value. Check this article for a more in-depth description of why partial functions are great.
You can build a Int0to100 struct, which has either a failable or throwable initializer (depending on taste):
struct Int0to100 {
    let value: UInt8

    init?(_ anInt: Int) {
        guard anInt >= 0 && anInt <= 100 else { return nil }
        value = UInt8(anInt)
    }

    init?(_ aFloat: Float) {
        let flooredValue = floor(aFloat)
        guard flooredValue >= 0 && flooredValue <= 100 else { return nil }
        value = UInt8(flooredValue)
    }

    // ... other initializers can be added the same way
}
and change foo to accept this type instead:
func foo(value: Int0to100) {
    // do your stuff here; you know for sure, at compile time,
    // that the function can be called with a valid value only
}
You move the responsibility of validating the integer value to the caller; however, the validation resolves to checking an optional, which is easy, and allows you to handle the scenario of an invalid number with minimal effort.
Another important aspect is that you explicitly declare the domain of the foo function, which improves the overall design of the code.
And not least, enforcement at compile time greatly reduces the potential for runtime issues.
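A hypothetical call site then looks like this, with the validation happening once at the boundary:

if let validated = Int0to100(42) {
    foo(value: validated)
} else {
    // handle the out-of-range input here
}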
If you know your incoming values will lie in 0..<256, then you can just construct a UInt8 and pass it to the function.
func foo(value: UInt8) -> UInt8 { return value }
let number = arbitraryNumber()
print(foo(value: UInt8(number)))
This will trap at runtime if the value is too large to fit in a byte, but otherwise it will work. You could protect against this type of error by doing some bounds checking between the second and third lines.
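For example, a minimal sketch of that bounds check (assuming arbitraryNumber() returns an Int):

let number = arbitraryNumber()
if number >= 0 && number < 256 {
    print(foo(value: UInt8(number)))
} else {
    // out of range: handle the error instead of trapping
}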
Question:
When attempting to stride over String.CharacterView.Index indices by e.g. a stride of 2
extension String.CharacterView.Index: Strideable { }

let str = "01234"
for _ in str.startIndex.stride(to: str.endIndex, by: 2) { } // fatal error
I get the following runtime exception
fatal error: cannot increment endIndex
Just creating the StrideTo<String.CharacterView.Index> above (let foo = str.startIndex.stride(to: str.endIndex, by: 2)) does not yield an error; it only occurs when attempting to stride/iterate over or operate on it (.next()).
What is the reason for this runtime exception; is it expected (a misuse of the conformance to Strideable)?
I'm using Swift 2.2 and Xcode 7.3. Details follow below.
Edit addition: error source located
Upon reading my question carefully, it would seem the error really does occur in the next() method of StrideToGenerator (see the bottom of this post), specifically at the following marked line
let ret = current
current += stride // <-- here
return ret
Even though the last update of current will never be returned (by the next call to next()), the final advance of the current index to a value greater than or equal to that of _end yields the specific runtime error above (for the Index type String.CharacterView.Index).
(0..<4).startIndex.advancedBy(4) // OK, -> 4
"foo".startIndex.advancedBy(4) // fatal error: cannot increment endIndex
However, one question still remains:
Is this a bug in the next() method of StrideToGenerator, or just an error that pops up due to a misuse of String.CharacterView.Index's conformance to Strideable?
Related
The following Q&A is related to the subject of iterating over characters in steps other than +1, and is worth including in this question even if the two questions differ.
Using String.CharacterView.Index.successor() in for statements
Especially note @Sulthan's neat solution in the thread above.
Details
(Apologies for hefty details/investigations of my own, just skip these sections if you can answer my question without the details herein)
The String.CharacterView.Index type describes a character position, and:
conforms to Comparable (and hence Equatable),
contains implementations of advancedBy(_:) and distanceTo(_:).
Hence, it can directly be made to conform to the protocol Strideable, making use of Strideable's default implementations of the methods stride(through:by:) and stride(to:by:). The examples below will focus on the latter (the former has analogous problems):
...
func stride(to end: Self, by stride: Self.Stride) -> StrideTo<Self>
Returns the sequence of values (self, self + stride, self + stride +
stride, ... last) where last is the last value in the progression
that is less than end.
Conforming to Strideable and striding by 1: all good
Extending String.CharacterView.Index to conform to Strideable and striding by 1 works fine:
extension String.CharacterView.Index: Strideable { }

var str = "0123"

// stride by 1: all good
str.startIndex.stride(to: str.endIndex, by: 1).forEach {
    print($0, str.characters[$0])
}
/* 0 0
   1 1
   2 2
   3 3 */
For an even number of indices in str above (indices 0..<4), this also works for a stride of 2:
// stride by 2: OK for an even number of characters in str
str.startIndex.stride(to: str.endIndex, by: 2).forEach {
    print($0, str.characters[$0])
}
/* 0 0
   2 2 */
However, for some cases of striding by >1: runtime exception
For an odd number of indices and a stride of 2, however, striding over the character view's indices yields a runtime error:
// stride by 2: fatal error for an odd number of characters in str
str = "01234"
str.startIndex.stride(to: str.endIndex, by: 2).forEach {
    print($0, str.characters[$0])
}
/* 0 0
   2 2
   fatal error: cannot increment endIndex */
Investigations of my own
My own investigations made me suspect that the error comes from the next() method of the StrideToGenerator structure, possibly when this method calls += on the Strideable element:
public func += <T : Strideable>(inout lhs: T, rhs: T.Stride) {
    lhs = lhs.advancedBy(rhs)
}
(from a version of the Swift source for swift/stdlib/public/core/Stride.swift that somewhat corresponds to Swift 2.2). Given the following Q&A:s
Trim end off of string in swift, getting error at runtime,
Swift distance() method throws fatal error: can not increment endIndex,
we could suspect that we would possibly need to use String.CharacterView.Index.advancedBy(_:limit:) rather than ...advancedBy(_:) above. However from what I can see, the next() method in StrideToGenerator guards against advancing the index past the limit.
Edit addition: the source of the error seems to indeed be located in the next() method in StrideToGenerator:
// ... in StrideToGenerator
public mutating func next() -> Element? {
    if stride > 0 ? current >= end : current <= end {
        return nil
    }
    let ret = current
    current += stride /* <-- will increase current to greater than or equal to end
                             if the stride is large enough (even though this last
                             current will never be returned by the next call to next()) */
    return ret
}
Even though the last update of current will never be returned (by the next call to next()), the final advance of the current index to a value greater than or equal to that of end yields the specific runtime error above, for the Index type String.CharacterView.Index.
(0..<4).startIndex.advancedBy(4) // OK, -> 4
"foo".startIndex.advancedBy(4) // fatal error: cannot increment endIndex
Is this to be considered a bug, or is String.CharacterView.Index simply not intended to (directly) conform to Strideable?
Simply declaring the protocol conformance
extension String.CharacterView.Index : Strideable { }
compiles because String.CharacterView.Index conforms to
BidirectionalIndexType, and ForwardIndexType/BidirectionalIndexType have default method implementations for advancedBy() and distanceTo()
as required by Strideable.
Strideable has the default protocol method implementation
for stride():
extension Strideable {
    // ...
    public func stride(to end: Self, by stride: Self.Stride) -> StrideTo<Self>
}
So the only methods which are "directly" implemented for
String.CharacterView.Index are – as far as I can see - the successor() and predecessor() methods from BidirectionalIndexType.
As you already figured out, the default method implementation of
stride() does not work well with String.CharacterView.Index.
But it is always possible to define dedicated methods for a concrete type. For the problems of making String.CharacterView.Index conform to Strideable, see
Vatsal Manot's answer below and the discussion in the comments – it took me a while to get what he meant :)
Here is a possible implementation of a stride(to:by:) method for String.CharacterView.Index:
extension String.CharacterView.Index {
    typealias Index = String.CharacterView.Index

    func stride(to end: Index, by stride: Int) -> AnySequence<Index> {
        precondition(stride != 0, "stride size must not be zero")
        return AnySequence { () -> AnyGenerator<Index> in
            var current = self
            return AnyGenerator {
                if stride > 0 ? current >= end : current <= end {
                    return nil
                }
                defer {
                    current = current.advancedBy(stride, limit: end)
                }
                return current
            }
        }
    }
}
This seems to work as expected:
let str = "01234"
str.startIndex.stride(to: str.endIndex, by: 2).forEach {
print($0,str.characters[$0])
}
Output
0 0
2 2
4 4
To simply answer your ending question: this is not a bug. This is normal behavior.
String.CharacterView.Index can never exceed the endIndex of the parent construct (i.e. the character view), and thus triggers a runtime error when forced to (as correctly noted in the latter part of your answer). This is by design.
The only solution is to write your own alternative to the stride(to:by:), one that avoids equalling or exceeding the endIndex in any way.
As you know already, you can technically implement Strideable, but you cannot prevent that error. And since stride(to:by:) is not blueprinted within the protocol itself but introduced in an extension, there is no way you can use a "custom" stride(to:by:) in a generic scope (i.e. <T: Strideable> etc.). Which means you should probably not try and implement it unless you are absolutely sure that there is no way that error can occur; something which seems impossible.
Solution: There isn't one, currently. However, if you feel that this is an issue, I encourage you to start a thread in the swift-evolution mailing list, where this topic would be best received.
This isn't really an answer; it's just that your question got me playing around. Let's ignore Stridable and just try striding through a character view:
let str = "01234"
var i = str.startIndex
// i = i.advancedBy(1)
let inc = 2
while true {
print(str.characters[i])
if i.distanceTo(str.endIndex) > inc {
i = i.advancedBy(inc)
} else {
break
}
}
As you can see, it is crucial to test with distanceTo before we call advancedBy. Otherwise, we risk attempting to advance right through the end index and we'll get the "fatal error: can not increment endIndex" bomb.
So my thought is that something like this must be necessary in order to make the indices of a character view stridable.
I will first explain what I'm trying to do and how I got to where I got stuck before getting to the question.
As a learning exercise for myself, I took some problems that I had already solved in Objective-C to see how I can solve them differently with Swift. The specific case that I got stuck on is a small piece that captures a value before and after it changes and interpolates between the two to create keyframes for an animation.
For this I had an object Capture with properties for the object, the key path, and two id properties for the values before and after. Later, when interpolating the captured values, I made sure they could be interpolated by wrapping each of them in a Value class that used a class cluster to return an appropriate class depending on the type of the wrapped value, or nil for unsupported types.
This works, and I am able to make it work in Swift as well following the same pattern, but it doesn't feel Swift like.
What worked
Instead of wrapping the captured values as a way of enabling interpolation, I created a Mixable protocol that the types could conform to, and used a protocol extension for when the type supported the necessary basic arithmetic:
protocol SimpleArithmeticType {
    func +(lhs: Self, rhs: Self) -> Self
    func *(lhs: Self, amount: Double) -> Self
}

protocol Mixable {
    func mix(with other: Self, by amount: Double) -> Self
}

extension Mixable where Self: SimpleArithmeticType {
    func mix(with other: Self, by amount: Double) -> Self {
        return self * (1.0 - amount) + other * amount
    }
}
This part worked really well and enforced homogeneous mixing (that a type could only be mixed with its own type), which wasn't enforced in the Objective-C implementation.
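For example (a hedged sketch; the concrete conformances aren't shown in the question), Double picks up mix(with:by:) for free through the constrained extension, since the standard library already provides the required + and * operators:

extension Double: SimpleArithmeticType {}
extension Double: Mixable {}

let mixed = 0.0.mix(with: 10.0, by: 0.5) // 5.0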
Where I got stuck
The next logical step, and this is where I got stuck, seemed to be to make each Capture instance (now a struct) hold two variables of the same mixable type instead of two AnyObject. I also changed the initializer argument from being an object and a key path to being a closure that returns an object, () -> T:
struct Capture<T: Mixable> {
    typealias Evaluation = () -> T

    let eval: Evaluation
    let before: T
    var after: T {
        return eval()
    }

    init(eval: Evaluation) {
        self.eval = eval
        self.before = eval()
    }
}
This works when the type can be inferred, for example:
let captureInt = Capture {
    return 3.0
}
// > Capture<Double>
but not with key-value coding, which returns AnyObject:
let captureAnyObject = Capture {
    return myObject.valueForKeyPath("opacity")!
}
error: cannot invoke initializer for type 'Capture' with an argument list of type '(() -> _)'
AnyObject does not conform to the Mixable protocol, so I can understand why this doesn't work. But I can check what type the object really is, and since I'm only covering a handful of mixable types, I thought I could cover all the cases and return the correct type of Capture. To see if this could even work, I made an even simpler example.
A simpler example
struct Foo<T> {
    let x: T
    init(eval: () -> T) {
        x = eval()
    }
}
which works when type inference is guaranteed:
let fooInt = Foo {
    return 3
}
// > Foo<Int>

let fooDouble = Foo {
    return 3.0
}
// > Foo<Double>
But not when the closure can return different types:
let condition = true
let foo = Foo {
    if condition {
        return 3
    } else {
        return 3.0
    }
}
error: cannot invoke initializer for type 'Foo' with an argument list of type '(() -> _)'
I'm not even able to define such a closure on its own.
let condition = true // as simple as it could be
let evaluation = {
    if condition {
        return 3
    } else {
        return 3.0
    }
}
error: unable to infer closure type in the current context
My Question
Is this something that can be done at all? Can a condition be used to determine the type of a generic? Or is there another way to hold two variables of the same type, where the type was decided based on a condition?
Edit
What I really want is to:
capture the values before and after a change and save the pair (old + new) for later (a heterogeneous collection of homogeneous pairs).
go through all the collected values and get rid of the ones that can't be interpolated (unless this step could be integrated with the collection step)
interpolate each homogeneous pair individually (mixing old + new).
But it seems like this direction is a dead end when it comes to solving that problem. I'll have to take a couple of steps back and try a different approach (and probably ask a different question if I get stuck again).
As discussed on Twitter, the type must be known at compile time. Nevertheless, for the simple example at the end of the question, you could just explicitly type the closure
let evaluation: () -> Double = { ... }
and it would work.
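Spelled out as a sketch (assuming Double is the type you want), the explicit type lets the integer literal in the first branch be inferred as a Double, so both branches agree:

let condition = true
let evaluation: () -> Double = {
    if condition {
        return 3 // the integer literal is inferred as a Double here
    } else {
        return 3.0
    }
}
let foo = Foo(eval: evaluation) // Foo<Double>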
So in the case of Capture and valueForKeyPath:, IMHO you should cast (either conditionally or with a forced cast) the value to the Mixable type you expect it to be, and it should work fine. After all, I'm not sure valueForKeyPath: is supposed to return different types depending on a condition.
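For instance, a hypothetical capture of the opacity key path (assuming the underlying value really is a Double):

let captureOpacity = Capture { () -> Double in
    return myObject.valueForKeyPath("opacity") as! Double
}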
What is the exact case where you would like to return 2 totally different types (that can't be implicitly casted as in the simple case of Int and Double above) in the same evaluation closure?
in my full example I also have cases for CGPoint, CGSize, CGRect, CATransform3D
The limitations are just as you have stated, because of Swift's strict typing. All types must be definitely known at compile time, and each thing can be of only one type - even a generic (it is resolved by the way it is called at compile time). Thus, the only thing you can do is turn your type into an umbrella type that is much more like Objective-C itself:
let condition = true
let evaluation = { () -> NSObject in // *
    if condition {
        return 3
    } else {
        return NSValue(CGPoint: CGPointMake(0, 1))
    }
}
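The price is that the type information has to be recovered at runtime. A hedged sketch of consuming such a result (reusing the evaluation closure from above):

let result = evaluation()
if let number = result as? NSNumber {
    print(number.doubleValue)
} else if let value = result as? NSValue {
    print(value.CGPointValue())
}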
Let's say I have a function which returns an optional: nil on error, a value on success:
func foo() -> Bar? { ... }
I can use following code to work with this function:
let fooResultOpt = foo()
if let fooResult = fooResultOpt {
    // continue correct operations here
} else {
    // handle error
}
However, there are a few problems with this approach in any non-trivial code:
Error handling is performed at the end, and it's easy to miss something. It's much better when the error-handling code follows the function call.
The code for the correct operations is indented by one level. If we have another function to call, we have to indent one more time.
In C, one could usually write something like this:
Bar *fooResult = foo();
if (fooResult == NULL) {
    // handle error and return
}
// continue correct operations here
I found two ways to achieve similar code style with Swift, but I don't like either.
let fooResultOpt = foo()
if fooResultOpt == nil {
    // handle error and return
}
// use fooResultOpt! from here
let fooResult = fooResultOpt! // or define another variable
If I write "!" everywhere, it just looks bad for my taste. I could introduce another variable, but that doesn't look good either. Ideally, I would like to see the following:
if !let fooResult = foo() {
    // handle error and return
}
// fooResult has type Bar and can be used at the top level
Did I miss something in the specification, or is there some other way to write good-looking Swift code?
Your assumptions are correct—there isn't a "negated if-let" syntax in Swift.
I suspect one reason for that might be grammar integrity. Throughout Swift (and commonly in other C-inspired languages), if you have a statement that can bind local symbols (i.e. name new variables and give them values) and that can have a block body (e.g. if, while, for), those bindings are scoped to said block. Letting a block statement bind symbols to its enclosing scope instead would be inconsistent.
It's still a reasonable thing to think about, though — I'd recommend filing a bug and seeing what Apple does about it.
This is what pattern matching is all about, and is the tool meant for this job:
let x: String? = "Yes"
switch x {
case .Some(let value):
println("I have a value: \(value)")
case .None:
println("I'm empty")
}
The if-let form is just a convenience for when you don't need both legs.
If what you are writing is a set of functions performing the same sequence of transformations, such as when processing a result returned by a REST call (check that the response is not nil, check the status, check for app/server errors, parse the response, etc.), what I would do is create a pipeline that transforms the input data at each step and at the end returns either nil or a transformed result of a certain type.
I chose the >>> custom operator, which visually indicates the data flow, but of course feel free to choose your own:
infix operator >>> { associativity left }

func >>> <T, V> (params: T?, next: T -> V?) -> V? {
    if let params = params {
        return next(params)
    }
    return nil
}
The operator is a function that receives as input a value of a certain type, and a closure that transforms the value into a value of another type. If the value is not nil, the function invokes the closure, passing the value, and returns its return value. If the value is nil, then the operator returns nil.
An example is probably needed, so let's suppose I have an array of integers, and I want to perform the following operations in sequence:
sum all elements of the array
square the result
divide by 5 and return the integer part and the remainder
sum the above 2 numbers together
These are the 4 functions:
func sumArray(array: [Int]?) -> Int? {
    if let array = array {
        return array.reduce(0, combine: +)
    }
    return nil
}

func powerOf2(num: Int?) -> Int? {
    if let num = num {
        return num * num
    }
    return nil
}

func module5(num: Int?) -> (Int, Int)? {
    if let num = num {
        return (num / 5, num % 5)
    }
    return nil
}

func sum(params: (num1: Int, num2: Int)?) -> Int? {
    if let params = params {
        return params.num1 + params.num2
    }
    return nil
}
and this is how I would use it:
let res: Int? = [1, 2, 3] >>> sumArray >>> powerOf2 >>> module5 >>> sum
The result of this expression is either nil or a value of the type as defined in the last function of the pipeline, which in the above example is an Int.
If you need to do better error handling, you can define an enum like this:
enum Result<T> {
    case Value(T)
    case Error(MyErrorType)
}
and replace all optionals in the above functions with Result<T>, returning Result.Error() instead of nil.
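For instance, a hedged sketch of the operator reworked for such a Result, using String as a stand-in for MyErrorType (which isn't defined in this thread):

enum Result<T> {
    case Value(T)
    case Error(String) // stand-in for MyErrorType
}

func >>> <T, V> (params: Result<T>, next: T -> Result<V>) -> Result<V> {
    switch params {
    case .Value(let value):
        return next(value)
    case .Error(let error):
        return .Error(error)
    }
}

Each stage then takes a plain T, and the first error short-circuits through the rest of the pipeline instead of collapsing to nil.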
I've found a way that looks better than the alternatives, but it uses language features in an unrecommended way.
Example using code from the question:
let fooResult: Bar! = foo()
if fooResult == nil {
    // handle error and return
}
// continue correct operations here
fooResult can then be used like a normal variable, without needing the "?" or "!" suffixes.
Apple documentation says:
Implicitly unwrapped optionals are useful when an optional’s value is confirmed to exist immediately after the optional is first defined and can definitely be assumed to exist at every point thereafter. The primary use of implicitly unwrapped optionals in Swift is during class initialization, as described in Unowned References and Implicitly Unwrapped Optional Properties.
How about the following:
func foo(i: Int) -> Int? {
    switch i {
    case 0: return 0
    case 1: return 1
    default: return nil
    }
}
var error: Int {
    println("Error")
    return 99
}
for i in 0...2 {
    var bob: Int = foo(i) ?? error
    println("\(i) produces \(bob)")
}
Results in the following output:
0 produces 0
1 produces 1
Error
2 produces 99