Proper way to handle a failed init - Swift

I am looking for a proper way to handle an invalid argument during initialization.
I am unsure how to do this in Swift, since init doesn't have a return type. How can I tell whoever is trying to initialize this class that they are doing something wrong?
init(timeInterval: Int) {
    if timeInterval > 0 {
        self.timeInterval = timeInterval
    } else {
        // ???? (return false?)
    }
}
Thank you!

Use a failable initializer. Such an initializer looks very similar to a regular designated initializer, but has a '?' character right after init and is allowed to return nil. A failable initializer creates an optional value.
struct Animal {
    let species: String
    init?(species: String) {
        if species.isEmpty { return nil }
        self.species = species
    }
}
See Apple's documentation on failable initializers for more detail.
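For example, a caller handles the failure with optional binding; a small usage sketch based on the Animal struct above:
if let giraffe = Animal(species: "Giraffe") {
    // giraffe is a non-optional Animal here
    println("Created a \(giraffe.species)")
}
let anonymous = Animal(species: "") // nil, the initializer failed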

In Swift, you can't really abort a task halfway through execution. There are no exceptions in Swift, and in general the philosophy is that aborting a task is dangerous and leads to bugs, so it just shouldn't be done.
So, you verify a value like this:
assert(timeInterval > 0)
Which will terminate the program if an invalid value is provided.
You should also change timeInterval to be a UInt so that there is a compiler error if anybody tries to pass a negative value, or an integer value that could be negative.
It's probably not the answer you're looking for. But the goal is to check for bad parameters as early as possible, and that means doing it before you create any objects with those parameters. Ideally the check should be done at compile time but that doesn't always work.
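Putting both suggestions together, a minimal sketch might look like this (the Scheduler class name is hypothetical, not from the question):
class Scheduler {
    let timeInterval: UInt // UInt: negative values are rejected at compile time
    init(timeInterval: UInt) {
        // Still assert if the interval must be strictly positive.
        assert(timeInterval > 0, "timeInterval must be greater than zero")
        self.timeInterval = timeInterval
    }
}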

I think this is the best solution; took it from: How should I handle parameter validation Swift
class NumberLessThanTen {
    var mySmallNumber: Int?
    class func instanceOrNil(number: Int) -> NumberLessThanTen? {
        if number < 10 {
            return NumberLessThanTen(number: number)
        } else {
            return nil
        }
    }
    required init() {
    }
    init(number: Int) {
        self.mySmallNumber = number
    }
}
let iv = NumberLessThanTen.instanceOrNil(17) // nil
let v = NumberLessThanTen.instanceOrNil(5) // valid instance
let n = v!.mySmallNumber // Some 5
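Callers can also use optional binding instead of force unwrapping; a small sketch along the same lines:
if let valid = NumberLessThanTen.instanceOrNil(5) {
    let n = valid.mySmallNumber // Some 5
} else {
    // nil was returned, the input was rejected
}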

In the Swift book by Apple, at the very bottom of this section: https://developer.apple.com/library/prerelease/ios/documentation/swift/conceptual/swift_programming_language/TheBasics.html#//apple_ref/doc/uid/TP40014097-CH5-XID_399
They say:
When to Use Assertions
Use an assertion whenever a condition has the potential to be false, but must definitely be true in order for your code to continue execution. Suitable scenarios for an assertion check include:
An integer subscript index is passed to a custom subscript implementation, but the subscript index value could be too low or too high.
A value is passed to a function, but an invalid value means that the function cannot fulfill its task.
An optional value is currently nil, but a non-nil value is essential for subsequent code to execute successfully.
This sounds exactly like your situation!
Thus your code should look like:
init(timeInterval: Int) {
    assert(timeInterval > 0, "Time Interval Must be a positive integer")
    // Continue your execution normally
}

Related

Confused about Optional vs Default value -1

We are working on a Swift project. There is a function like this:
func getSleepAmmount() -> Int {
    // calculate sleep time
    // return value when valid
    // else
    return -1
}
My team member prefers the above function, where the caller needs to check against -1, which I am not comfortable with. My suggestion is to redesign it to return nil (although callers still need to check for nil), like this:
func getSleepAmmount() -> Int? {
    // calculate sleep time
    // return value when valid
    // else
    return nil
}
But my colleagues do not want to redesign. Which version of the function is cleaner, and why?
Obviously, nil is much cleaner, because -1 by itself means nothing; it is just a magic number. It is difficult to support, refactor and handle this case.
Returning nil is a better solution than using any garbage or default value.
Returning a default value may in the future clash with an actual result.
Also, there might be other developers dealing with the same code, so using nil will be clearer than using -1.
The second is better, as you'll write:
if let v = getSleepAmmount() { }
But with the first:
let v = getSleepAmmount()
if v > 0 { }
Returning nil means "this isn't a valid return", while -1 may mean something else and can be misunderstood by a new developer reading the code.
If there is code in the caller that should only run if there is a valid sleep amount, then optionals are the better and clearer way to go. This is exactly what guard let and if let are designed for:
guard let sleepAmount = getSleepAmmount() else { return }
// do something with sleepAmount
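An if let version of the same check, for when the caller does not want to return early; a small illustrative sketch:
if let sleepAmount = getSleepAmmount() {
    // do something with sleepAmount
} else {
    // handle the missing value
}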
An even better way would be to throw an error inside the function:
enum SleepError: Error { case invalidSleepAmount }

func getSleepAmmount() throws -> Int {
    // calculate sleep time
    // return when valid
    // else
    throw SleepError.invalidSleepAmount
}
Then
do {
    let sleepAmount = try getSleepAmmount()
    // do something with it
} catch SleepError.invalidSleepAmount {
    // error processing
} catch {
    // any other errors
}
(If you want, your function could throw different errors so the caller gets to know why the sleep amount is invalid, SleepTooShort, SleepTooLong, NoSleep etc)
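A hedged sketch of that idea, replacing the single invalidSleepAmount case above with the more specific cases the answer mentions (the case names are illustrative):
enum SleepError: Error {
    case sleepTooShort
    case sleepTooLong
    case noSleep
}

do {
    let sleepAmount = try getSleepAmmount()
    // do something with it
} catch SleepError.noSleep {
    // react to this specific failure
} catch {
    // handle the remaining cases
}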

Swift Generic "Numeric" Protocol that Validates Numeric Values

Swift 3.
Ultimately my functions need to receive UInt8 data types, but I'm never sure if the arguments I will receive from callers will be Int, Int64, UInt, Float, etc. I know they will be numeric types, I just don't know which flavor.
I could do:
func foo(value: Int) { }
func foo(value: Float) {}
func foo(value: UInt) {}
But that's crazy. So I thought I could do something like create a protocol
protocol ValidEncodable {
}
And then pass in types that conform:
func foo(value: ValidEncodable) { }
And then in that function I could get my values into the correct format
func foo(value: ValidEncodable) -> UInt8 {
    let correctedValue = min(max(floor(value), 0), 100)
    return UInt8(correctedValue)
}
I'm really struggling to figure out
1) How to create this ValidEncodable protocol that contains all the numeric types
2) How to do things like floor(value) when the value I get is an Int, without iterating over every possible numeric type (i.e. floor(x) is only available on floating-point types)
Ultimately I need the values to be UInt8 in the range of 0-100. The whole reason for this madness is that I'm parsing XML files to my own internal data structures and I want to bake in some validation to my values.
This can be done without a protocol, by making use of compiler checks, which greatly reduces the chances of bugs.
My recommendation is to use a partial function, i.e. a function that, instead of taking an Int, takes an already validated value. Check this article for a more in-depth description of why partial functions are great.
You can build a Int0to100 struct, which has either a failable or throwable initializer (depending on taste):
struct Int0to100 {
    let value: UInt8
    init?(_ anInt: Int) {
        guard anInt >= 0 && anInt <= 100 else { return nil }
        value = UInt8(anInt)
    }
    init?(_ aFloat: Float) {
        let flooredValue = floor(aFloat)
        guard flooredValue >= 0 && flooredValue <= 100 else { return nil }
        value = UInt8(flooredValue)
    }
    // ... other initializers can be added the same way
}
and change foo so that it can only be called with this argument type:
func foo(value: Int0to100) {
    // do your stuff here, you know for sure, at compile time,
    // that the function can be called with a valid value only
}
You move the responsibility of validating the integer value to the caller; however, the validation resolves to checking an optional, which is easy and lets you handle the scenario of an invalid number with minimal effort.
Another important aspect is that you explicitly declare the domain of the foo function, which improves the overall design of the code.
And not least, constraints enforced at compile time greatly reduce the potential for runtime issues.
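A quick usage sketch of the two pieces together (the input value is arbitrary):
if let validValue = Int0to100(42) {
    foo(value: validValue)
} else {
    // the input was out of range, handle it here
}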
If you know your incoming values will lie in 0..<256, then you can just construct a UInt8 and pass it to the function.
func foo(value: UInt8) -> UInt8 { return value }
let number = arbitraryNumber()
print(foo(value: UInt8(number)))
This will crash at runtime if the value is too large to fit in a byte, but otherwise it will work. You could protect against this type of error by doing some bounds checking between the second and third lines.
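A minimal sketch of that bounds check, assuming arbitraryNumber() returns an Int:
let number = arbitraryNumber()
if number >= 0 && number < 256 {
    print(foo(value: UInt8(number)))
} else {
    // number does not fit in a UInt8, handle the error here
}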

Can a condition be used to determine the type of a generic?

I will first explain what I'm trying to do and how I got to where I got stuck before getting to the question.
As a learning exercise for myself, I took some problems that I had already solved in Objective-C to see how I can solve them differently with Swift. The specific case that I got stuck on is a small piece that captures a value before and after it changes and interpolates between the two to create keyframes for an animation.
For this I had an object Capture with properties for the object, the key path and two id properties for the values before and after. Later, when interpolating the captured values I made sure that they could be interpolated by wrapping each of them in a Value class that used a class cluster to return an appropriate class depending on the type of value it wrapped, or nil for types that wasn't supported.
This works, and I am able to make it work in Swift as well following the same pattern, but it doesn't feel Swift-like.
What worked
Instead of wrapping the captured values as a way of enabling interpolation, I created a Mixable protocol that the types could conform to and used a protocol extension for when the type supported the necessary basic arithmetic:
protocol SimpleArithmeticType {
    func +(lhs: Self, right: Self) -> Self
    func *(lhs: Self, amount: Double) -> Self
}

protocol Mixable {
    func mix(with other: Self, by amount: Double) -> Self
}

extension Mixable where Self: SimpleArithmeticType {
    func mix(with other: Self, by amount: Double) -> Self {
        return self * (1.0 - amount) + other * amount
    }
}
This part worked really well and enforced homogeneous mixing (that a type could only be mixed with its own type), which wasn't enforced in the Objective-C implementation.
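For example (not from the original code, just a sketch of how a concrete type opts in), Double already provides the required + and * operators, so conformance can be declared with empty extensions:
extension Double: SimpleArithmeticType {}
extension Double: Mixable {}

let halfway = 2.0.mix(with: 4.0, by: 0.5) // 3.0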
Where I got stuck
The next logical step, and this is where I got stuck, seemed to be to make each Capture instance (now a struct) hold two variables of the same mixable type instead of two AnyObject. I also changed the initializer argument from being an object and a key path to being a closure that returns an object ()->T
struct Capture<T: Mixable> {
    typealias Evaluation = () -> T
    let eval: Evaluation
    let before: T
    var after: T {
        return eval()
    }
    init(eval: Evaluation) {
        self.eval = eval
        self.before = eval()
    }
}
This works when the type can be inferred, for example:
let captureInt = Capture {
    return 3.0
}
// > Capture<Double>
but not with key-value coding, which returns AnyObject:
let captureAnyObject = Capture {
    return myObject.valueForKeyPath("opacity")!
}
error: cannot invoke initializer for type 'Capture' with an argument list of type '(() -> _)'
AnyObject does not conform to the Mixable protocol, so I can understand why this doesn't work. But I can check what type the object really is, and since I'm only covering a handful of mixable types, I thought I could cover all the cases and return the correct type of Capture. To see if this could even work, I made an even simpler example.
A simpler example
struct Foo<T> {
    let x: T
    init(eval: () -> T) {
        x = eval()
    }
}
which works when type inference is guaranteed:
let fooInt = Foo {
    return 3
}
// > Foo<Int>

let fooDouble = Foo {
    return 3.0
}
// > Foo<Double>
But not when the closure can return different types
let condition = true
let foo = Foo {
    if condition {
        return 3
    } else {
        return 3.0
    }
}
error: cannot invoke initializer for type 'Foo' with an argument list of type '(() -> _)'
I'm not even able to define such a closure on its own.
let condition = true // as simple as it could be
let evaluation = {
    if condition {
        return 3
    } else {
        return 3.0
    }
}
error: unable to infer closure type in the current context
My Question
Is this something that can be done at all? Can a condition be used to determine the type of a generic? Or is there another way to hold two variables of the same type, where the type was decided based on a condition?
Edit
What I really want is to:
capture the values before and after a change and save the pair (old + new) for later (a heterogeneous collection of homogeneous pairs).
go through all the collected values and get rid of the ones that can't be interpolated (unless this step could be integrated with the collection step)
interpolate each homogeneous pair individually (mixing old + new).
But it seems like this direction is a dead end when it comes to solving that problem. I'll have to take a couple of steps back and try a different approach (and probably ask a different question if I get stuck again).
As discussed on Twitter, the type must be known at compile time. Nevertheless, for the simple example at the end of the question you could just explicitly type
let evaluation: Foo<Double> = { ... }
and it would work.
So in the case of Capture and valueForKeyPath: IMHO you should cast the value (either safely or with a forced cast) to the Mixable type you expect it to be, and it should work fine. After all, I'm not sure valueForKeyPath: is supposed to return different types depending on a condition.
What is the exact case where you would like to return 2 totally different types (that can't be implicitly casted as in the simple case of Int and Double above) in the same evaluation closure?
in my full example I also have cases for CGPoint, CGSize, CGRect, CATransform3D
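A minimal sketch of the casting approach suggested in the answer above, assuming the value stored at "opacity" really is a Double and that Double has been made Mixable:
let captureOpacity = Capture {
    return myObject.valueForKeyPath("opacity") as! Double
}
// > Capture<Double>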
The limitations are just as you have stated, because of Swift's strict typing. All types must be definitely known at compile time, and each thing can be of only one type, even a generic (it is resolved by the way it is called at compile time). Thus, the only thing you can do is turn your type into an umbrella type that is much more like Objective-C itself:
let condition = true
let evaluation = {
    () -> NSObject in // *
    if condition {
        return 3
    } else {
        return NSValue(CGPoint: CGPointMake(0, 1))
    }
}

How to handle initial nil value for reduce functions

I would like to learn and use more functional programming in Swift. So, I've been trying various things in playground. I don't understand Reduce, though. The basic textbook examples work, but I can't get my head around this problem.
I have an array of strings called "toDoItems". I would like to get the longest string in this array. What is the best practice for handling the initial nil value in such cases? I think this probably happens often. I thought of writing a custom function and using it.
func optionalMax(maxSofar: Int?, newElement: Int) -> Int {
    if let definiteMaxSofar = maxSofar {
        return max(definiteMaxSofar, newElement)
    }
    return newElement
}
// Just testing - nums is an array of Ints. Works.
var maxValueOfInts = nums.reduce(0) { optionalMax($0, $1) }
// ERROR: cannot invoke 'reduce' with an argument list of type '(nil, (_,_)->_)'
var longestOfStrings = toDoItems.reduce(nil) { optionalMax(count($0), count($1)) }
It might just be that Swift does not automatically infer the type of your initial value. Try making it clear by explicitly declaring it:
var longestOfStrings = toDoItems.reduce(nil as Int?) { optionalMax($0, count($1)) }
By the way, notice that I do not call count on $0 (your accumulator), since it is not a String but an optional Int (Int?).
Generally, to avoid confusion reading the code later, I explicitly label the accumulator as a and the element coming in from the series as x:
var longestOfStrings = toDoItems.reduce(nil as Int?) { a, x in optionalMax(a, count(x)) }
This way is clearer than $0 and $1 in the code wherever the accumulator or the single element is used.
Hope this helps
Initialise it with an empty string "" rather than nil. Or you could even initialise it with the first element of the array, but an empty string seems better.
Second go at this after writing some wrong code: this will return the longest string, if you are happy with an empty string being returned for an empty array:
toDoItems.reduce("") { count($0) > count($1) ? $0 : $1 }
Or if you want nil, use
toDoItems.reduce(nil as String?) { $0 == nil || count($1) > count($0!) ? $1 : $0 }
The problem is that the compiler cannot infer the types you are using for your seed and accumulator closure if you seed with nil, and you also need to get the optional type correct when using the optional string as $0.

swift, optional unwrapping, reversing if condition

Let's say I have a function which returns an optional: nil on error and a value on success:
func foo() -> Bar? { ... }
I can use following code to work with this function:
let fooResultOpt = foo()
if let fooResult = fooResultOpt {
    // continue correct operations here
} else {
    // handle error
}
However, there are a few problems with this approach for any non-trivial code:
Error handling is performed at the end, and it's easy to miss something. It's much better when the error handling code follows the function call.
The correct-operations code is indented by one level. If we have another function to call, we have to indent one more time.
With C, one could usually write something like this:
Bar *fooResult = foo();
if (fooResult == NULL) {
    // handle error and return
}
// continue correct operations here
I found two ways to achieve similar code style with Swift, but I don't like either.
let fooResultOpt = foo()
if fooResultOpt == nil {
    // handle error and return
}
// use fooResultOpt! from here
let fooResult = fooResultOpt! // or define another variable
If I write "!" everywhere, it just looks bad for my taste. I could introduce another variable, but that doesn't look good either. Ideally I would like to see the following:
if !let fooResult = foo() {
    // handle error and return
}
// fooResult has Bar type and can be used in the top level
Did I miss something in the specification or is there some another way to write good looking Swift code?
Your assumptions are correct—there isn't a "negated if-let" syntax in Swift.
I suspect one reason for that might be grammar integrity. Throughout Swift (and commonly in other C-inspired languages), if you have a statement that can bind local symbols (i.e. name new variables and give them values) and that can have a block body (e.g. if, while, for), those bindings are scoped to said block. Letting a block statement bind symbols to its enclosing scope instead would be inconsistent.
It's still a reasonable thing to think about, though — I'd recommend filing a bug and seeing what Apple does about it.
This is what pattern matching is all about, and is the tool meant for this job:
let x: String? = "Yes"
switch x {
case .Some(let value):
println("I have a value: \(value)")
case .None:
println("I'm empty")
}
The if-let form is just a convenience for when you don't need both legs.
If what you are writing is a set of functions performing the same sequence of transformations, such as when processing a result returned by a REST call (check that the response is not nil, check the status, check for an app/server error, parse the response, etc.), what I would do is create a pipeline that at each step transforms the input data and at the end returns either nil or a transformed result of a certain type.
I chose the >>> custom operator, which visually indicates the data flow, but of course feel free to choose your own:
infix operator >>> { associativity left }

func >>> <T, V> (params: T?, next: T -> V?) -> V? {
    if let params = params {
        return next(params)
    }
    return nil
}
The operator is a function that receives as input a value of a certain type, and a closure that transforms the value into a value of another type. If the value is not nil, the function invokes the closure, passing the value, and returns its return value. If the value is nil, then the operator returns nil.
An example is probably needed, so let's suppose I have an array of integers, and I want to perform the following operations in sequence:
sum all elements of the array
raise the result to the power of 2 (i.e. square it)
divide by 5 and return the integer part and the remainder
sum the above 2 numbers together
These are the 4 functions:
func sumArray(array: [Int]?) -> Int? {
    if let array = array {
        return array.reduce(0, combine: +)
    }
    return nil
}

func powerOf2(num: Int?) -> Int? {
    if let num = num {
        return num * num
    }
    return nil
}

func module5(num: Int?) -> (Int, Int)? {
    if let num = num {
        return (num / 5, num % 5)
    }
    return nil
}

func sum(params: (num1: Int, num2: Int)?) -> Int? {
    if let params = params {
        return params.num1 + params.num2
    }
    return nil
}
and this is how I would use them:
let res: Int? = [1, 2, 3] >>> sumArray >>> powerOf2 >>> module5 >>> sum
The result of this expression is either nil or a value of the type as defined in the last function of the pipeline, which in the above example is an Int.
If you need to do better error handling, you can define an enum like this:
enum Result<T> {
    case Value(T)
    case Error(MyErrorType)
}
and replace all optionals in the above functions with Result<T>, returning Result.Error() instead of nil.
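A hedged sketch of what one pipeline step might look like after that change, keeping the same naming style as the answer (MyErrorType stands for whatever error type you define):
func powerOf2(num: Result<Int>) -> Result<Int> {
    switch num {
    case .Value(let n):
        return .Value(n * n)
    case .Error(let error):
        return .Error(error)
    }
}
Note that the >>> operator would also need a matching overload that switches on Result instead of unwrapping an optional.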
I've found a way that looks better than the alternatives, but it uses language features in an unrecommended way.
Example using code from the question:
let fooResult: Bar! = foo()
if fooResult == nil {
    // handle error and return
}
// continue correct operations here
fooResult can be used as a normal variable, and there is no need to use the "?" or "!" suffixes.
Apple documentation says:
Implicitly unwrapped optionals are useful when an optional’s value is confirmed to exist immediately after the optional is first defined and can definitely be assumed to exist at every point thereafter. The primary use of implicitly unwrapped optionals in Swift is during class initialization, as described in Unowned References and Implicitly Unwrapped Optional Properties.
How about the following:
func foo(i: Int) -> Int? {
    switch i {
    case 0: return 0
    case 1: return 1
    default: return nil
    }
}

var error: Int {
    println("Error")
    return 99
}

for i in 0...2 {
    var bob: Int = foo(i) ?? error
    println("\(i) produces \(bob)")
}
Results in the following output:
0 produces 0
1 produces 1
Error
2 produces 99