Weird optional type behaviour in Swift forEach

This code works fine. It iterates over my array of one Int! and prints its magnitude.
import Foundation

let x: Int! = 1
[x].forEach { i in
    print(i.magnitude)
}
Output:
1
Presumably, i in the loop body is an Int or an Int!, and indeed if I ask Xcode for "quick help" on forEach it reports:
func forEach(_ body: (Int) throws -> Void) rethrows
However, if I perform two statements in my forEach body, it instead fails to compile, complaining that I need to unwrap i, which now has the optional type Int?.
import Foundation

let x: Int! = 1
[x].forEach { i in
    print(i.magnitude)
    print(i.magnitude)
}
Compile error:
Value of optional type 'Int?' must be unwrapped to refer to member 'magnitude' of wrapped base type 'Int'
And if I ask for "quick help" now I get:
func forEach(_ body: (Int?) throws -> Void) rethrows
How on earth does the number of statements I place in my loop body manage to affect the type of the loop variable?

Basically, you've elicited an edge case of an edge case. You've combined two things that are the work of the devil:
Implicitly unwrapped Optionals
Implicit type inference of closures, along with the fact that
Implicit type inference of closures works differently when the closure consists of one line (this is where the "How on earth does the number of statements" comes in)
You should try to avoid both of those; your code will be cleaner and will compile a lot faster. Indeed, implicit type inference of anything other than a single literal, like a string, Int, or Double, is a huge drag on compilation times.
I won't pretend to imitate the compiler's reasoning; I'll just show you an actual solution (other than just not using an IUO in the first place):
[x].forEach { (i: Int) in
    print(i.magnitude)
    print(i.magnitude)
}
Our Int type is legal because we take advantage of the single "get out of jail free" card saying that an implicitly unwrapped Optional can be used directly where the unwrapped type itself is expected. And by explicitly stating the type, we clear up the compiler's doubts.
(I say "directly" because implicit unwrappedness of an Optional is not propagated thru passing and assignment. That is why in your second example you discovered Int?, not Int, being passed into the closure.)


Using type as a value, why is the "self" keyword required here?

I'm currently learning about types as values in functions, and wrote this sample code to play around:
import Foundation

class Animal {
    func sound() {
        print("Generic animal noises")
    }
}

func foo(_ t: Animal) {
    print("Hi")
}

foo(Animal) // Cannot convert value of type 'Animal.Type' to expected argument type 'Animal'
I'm not surprised by this result. Obviously you can't pass the type itself as an argument where an instance of that type is expected. But notice that the compiler says the argument I passed was of type Animal.Type. So if I did this, it should compile, right?
func foo(_ t: Animal.Type) {
    print("Hi")
}

foo(Animal) // Expected member name or constructor call after type name
This is what really confuses me a heck ton: the compiler told me it was of type Animal.Type, but after making this change it once again shows an error.
Of course I listened to the fix Swift suggested and did:
foo(Animal.self) //Works correctly
But my biggest question is: WHY? Isn't Animal itself the type? Why does the compiler require me to use Animal.self to get the type? This really confuses me, and I would appreciate some guidance.
Self-answering: with the help of the comments, I was able to find out the reason.
Using .self after the type name is called a Postfix Self Expression:
A postfix self expression consists of an expression or the name of a type, immediately followed by .self. It has the following forms:
expression.self
type.self
The first form evaluates to the value of the expression. For example, x.self evaluates to x.
The second form evaluates to the value of the type. Use this form to access a type as a value. For example, because SomeClass.self evaluates to the SomeClass type itself, you can pass it to a function or method that accepts a type-level argument.
Thus, the .self keyword is required to treat the type as a value that can be passed as an argument to functions.
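To round this out, here is a small sketch of what passing a type as a value enables. makeAnimal is a hypothetical helper, and the initializer must be marked required so that it can be called on a metatype:
class Animal {
    required init() {}  // 'required' so init can be called on an Animal.Type value
    func sound() {
        print("Generic animal noises")
    }
}

func makeAnimal(_ t: Animal.Type) -> Animal {
    return t.init()     // instantiate whatever concrete type was passed in
}

makeAnimal(Animal.self).sound() // Generic animal noises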

Swift type inference in methods that can throw and cannot

As you may know, Swift can infer types from usage. For example, you can have overloaded methods that differ only in return type and freely use them as long as the compiler is able to infer the type, for example with the help of an additional explicitly typed variable that holds the return value of such a method.
I've found some funny behaviour, though. Imagine this class:
class MyClass {
    enum MyError: Error {
        case notImplemented
        case someException
    }

    func fun1() throws -> Any {
        throw MyError.notImplemented
    }

    func fun1() -> Int {
        return 1
    }

    func fun2() throws -> Any {
        throw MyError.notImplemented
    }

    func fun2() throws -> Int {
        if false {
            throw MyError.someException
        } else {
            return 2
        }
    }
}
Of course, it will work like:
let myClass = MyClass()
// let result1 = myClass.fun1() // error: ambiguous use of 'fun1()'
let result1: Int = myClass.fun1() // OK
But next you can write something like:
// print(myClass.fun1()) // error: call can throw but is not marked with 'try'
// BUT
print(try? myClass.fun1()) // warning: no calls to throwing functions occur within 'try' expression
so it looks like mutually exclusive diagnostics. The compiler tries to choose the right function; with the first call it tries to coerce the result from Int to Any, but what is it trying to do with the second one?
Moreover, code like
if let result2 = try? myClass.fun2() { // No warnings
print(result2)
}
will produce no warning, so one may assume that the compiler is able to choose the right overload here (maybe based on the fact that one of the overloads actually returns nothing and only throws).
Am I right with my last assumption? Are the warnings for fun1() logical? Are there tricks to fool the compiler, or to help it with type inference?
Obviously you should never, ever write code like this. It has way too many ways it can bite you, and as you see, it does. But let's see why.
First, try is just a decoration in Swift. It's not for the compiler. It's for you. The compiler works out all the types, and then determines whether a try was necessary. It doesn't use try to figure out the types. You can see this in practice here:
class X {
    func x() throws -> X {
        return self
    }
}

let y = try X().x().x()
You only need try one time, even though there are multiple throwing calls in the chain. Imagine how this would work if you'd created overloads on x() based on throws vs non-throws. The answer is "it doesn't matter" because the compiler doesn't care about the try.
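Here is a hedged sketch of that scenario, consistent with the behaviour described here (the compiler resolves the overload from the types alone, preferring the non-throwing candidate, and only afterwards checks whether try was needed):
func f() throws -> Int { return 1 }
func f() -> Int { return 2 }

let a = f()     // resolves to the non-throwing overload: 2
let b = try f() // still the non-throwing overload; the compiler merely warns that
                // no calls to throwing functions occur within the 'try' expression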
Next there's the issue of type inference vs type coercion. This is type inference:
let result1 = myClass.fun1() // error: ambiguous use of 'fun1()'
Swift will never infer an ambiguous type. This could be Any or it could be Int, so it gives up.
This is not type inference (the type is known):
let result1: Int = myClass.fun1() // OK
This also has a known, unambiguous type (note no ?):
let x : Any = try myClass.fun1()
But this requires type coercion (much like your print example)
let x : Any = try? myClass.fun1() // Expression implicitly coerced from `Int?` to `Any`
// No calls to throwing function occur within 'try' expression
Why does this call the Int version? try? returns an Optional (which is an Any). So Swift has the option here of an expression that returns Int? and coercing that to Any, or one that returns Any? and coercing that to Any. Swift pretty much always prefers real types to Any (and it properly hates Any?). This is one of the many reasons to avoid Any in your code; it interacts with Optional in bizarre ways. It's arguable that this should be an error instead, but Any is such a squirrelly type that it's very hard to nail down all its corner cases.
So how does this apply to print? The parameter of print is Any, so this is like the let x: Any = ... example rather than the let x = ... example.
A few automatic coercions to keep in mind when thinking about these things:
Every T can be trivially coerced to T?
Every T can be explicitly coerced to Any
Every T? can also be explicitly coerced to Any
Any can be trivially coerced to Any? (also Any??, Any???, and Any????, etc)
Any? (Any??, Any???, etc) can be explicitly coerced to Any
Every non-throwing function can be trivially coerced to a throwing version
So overloading purely on "throws" is dangerous
So by mixing throws/non-throws conversions with Any/Any? conversions, and throwing try? into the mix (which promotes everything into an optional), you've created a perfect storm of confusion.
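As a quick sketch, here are the first five of those coercions in action (the as Any casts are the ones the compiler would otherwise warn about):
let i: Int = 1
let opt: Int? = i                 // 1: T trivially coerces to T?
let any: Any = i                  // 2: T coerces to Any
let anyFromOpt: Any = opt as Any  // 3: T? coerces to Any; 'as Any' silences the warning
let anyOpt: Any? = any            // 4: Any trivially coerces to Any?
let back: Any = anyOpt as Any     // 5: Any? coerces back to Any, again explicitly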
Obviously you should never, ever write code like this.
The Swift compiler always tries to call the most specific overloaded function if there are several overloaded implementations.
The behaviour shown in your question is expected, since any type in Swift can be represented as Any. Even if you annotate the result value as Any, like let result2: Any = try? myClass.fun1(), the compiler will actually call the implementation of fun1 returning an Int and then cast the return value to Any, since that is the more specific overloaded implementation of fun1.
You can get the compiler to call the version returning Any by casting the return value to Any rather than type annotating it.
let result2 = try? myClass.fun1() as Any //nil, since the function throws an error
This behaviour can be even better observed if you add another overloaded version of fun1 to your class, such as
func fun1() throws -> String {
    return ""
}
With fun1 having 3 overloaded versions, the outputs will be the following:
let result1: Int = myClass.fun1() // 1
print(try? myClass.fun1()) //error: ambiguous use of 'fun1()'
let result2: Any = try? myClass.fun1() //error: ambiguous use of 'fun1()'
let stringResult2: String? = try? myClass.fun1() // ""
As you can see, in this example the compiler simply cannot decide which overloaded version of fun1 to use even if you add the Any type annotation. The versions returning Int and String are both more specialized than the version returning Any, so the Any version won't be called; but since both specialized versions would be equally valid, the compiler cannot decide which one to call.
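Following the as-cast trick shown above, explicitly casting the result should resolve even the three-way ambiguity (a sketch, not verified against every compiler version):
let anyResult = try? myClass.fun1() as Any       // nil, since the Any overload throws
let stringResult = try? myClass.fun1() as String // Optional(""), via the String overload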

T, Optional<T> vs. Void, Optional<Void>

// this declaration / definition of variable is OK, as expected
var i = Optional<Int>.None
var j:Int?
// for the next line of code compiler produce a nice warning
// Variable 'v1' inferred to have type 'Optional<Void>' (aka 'Optional<()>'), which may be unexpected
var v1 = Optional<Void>.None
// but the next sentence doesn't produce any warning
var v2:Void?
// the non-optional version produces the warning in the same way
// Variable 'v3' inferred to have type '()', which may be unexpected
var v3 = Void()
// but the compiler feels fine with the next
var v4: Void = Void()
What is the difference? Why is the Swift compiler always happy if the type is anything other than Void?
The key word in the warning is "inferred." Swift doesn't like inferring Void because it's usually not what you meant. But if you explicitly ask for it (: Void) then that's fine, and how you would quiet the warning if it's what you mean.
It's important to recognize which types are inferred and which are explicit. To infer is to "deduce or conclude (information) from evidence and reasoning rather than from explicit statements." It is not a synonym for "guess" or "choose." If the type is ambiguous, then the compiler will generate an error. The type must always be well-defined. The question is whether it is explicitly defined, or defined via inference based on explicit information.
This statement has a type inference:
let x = Foo()
The type of Foo() is explicitly known, but the type of x is inferred based on the type of the entire expression (Foo). It's well defined and completely unambiguous, but it's inferred.
This statement has no type inference:
let x: Foo = Foo()
But also, there are no type inferences here:
var x: Foo? = nil
x = Foo()
The type of x (Foo?) in the second line is explicit because it was explicitly defined in the line above.
That's why some of your examples generate warnings (when there is a Void inference) and others do not (when there is only explicit use of Void). Why do we care about inferred Void? Because it can happen by accident very easily, and is almost never useful. For example:
func foo() {}
let x = foo()
This is legal Swift, but it generates an "inferred to have type '()'" warning. This is a very easy error to make. You'd like a warning at least if you try to assign the result of something that doesn't return a result.
So how is it possible that we assign the result of something that doesn't return a result? It's because every function returns a result. We just are allowed to omit that information if the return is Void. It's important to remember that Void does not mean "no type" or "nothing." It is just a typealias for (), which is a tuple of zero elements. It is just as valid a type as Int.
The full form of the above code is:
func foo() -> () { return () }
let x = foo()
This returns the same warning, because it's the same thing. We're allowed to drop the -> () and the return (), but they exist, and so we could assign () to x if we wanted to. But it's incredibly unlikely that we'd want to. We almost certainly made a mistake and the compiler warns us about that. If for some reason we want this behavior, that's fine. It's legal Swift. We just have to be explicit about the type rather than rely on type inference, and the warning will go away:
let x: Void = foo()
Swift is being very consistent in generating warnings in your examples, and you really do want those warnings. It's not arbitrary at all.
EDIT: You added a different example:
var v = Optional<Void>()
This generates the error: ambiguous use of 'init()'. That's because the compiler isn't certain whether you mean Optional.init() which would be .None, or Optional.init(_ some: ()), which would be .Some(()). Ambiguous types are forbidden, so you get a hard error.
In Swift, any value will implicitly convert to its equivalent 1-tuple. For example, 1 and (1) are different types: the first is an Int and the second is a tuple containing an Int. But Swift will silently convert between these for you (this is why you sometimes see parentheses pop up in surprising places in error messages). So foo() and foo(()) are the same thing. In almost every possible case, that doesn't matter. But in this one case, where the type really is (), it matters and makes things ambiguous.
var i = Optional<Int>()
This unambiguously refers to Optional.init() and returns nil.
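For illustration, here is a brief sketch of the two readings the compiler is torn between, using the Swift 2-era .None/.Some spelling from the question:
let none: Void? = Optional<Void>.None     // what init() would have produced
let some: Void? = Optional<Void>.Some(()) // what init(_ some: ()) would have produced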
The compiler is warning you about the "dummy" type Void, which is actually an alias for the empty tuple (), and which doesn't have many uses.
If you don't clearly specify that you want your variable to be of type Void and instead let the compiler infer the type, it will warn you about this, as it may be that you didn't want to do this in the first place.
For example:
func doSomething() -> Void {
}
let a = doSomething()
will give you a warning, since there's only one possible value doSomething() can return: an empty tuple.
On the other hand,
let a: Void = doSomething()
will not generate a warning as you explicitly tell the compiler that you want a Void variable.

Using init() in map()

TL;DR
Why doesn't this work?
"abcdefg".characters.map(String.init) // error: type of expression is ambiguous without more context
Details
One really cool thing I like in Swift is the ability to convert a collection of one thing to another by passing in an init method (assuming a matching init for that type exists).
Here's an example converting a list of tuples to instances of ClosedInterval.
[(1,3), (3,4), (4,5)].map(ClosedInterval.init)
That example also takes advantage of the fact that we can pass a tuple of arguments as a single argument as long as the tuple matches the function's argument list.
Here's another example, this time converting a list of numbers to string instances.
(1...100).map(String.init)
Unfortunately, the next example does not work. Here I am trying to split up a string into a list of single-character strings.
"abcdefg".characters.map(String.init) // error: type of expression is ambiguous without more context
map() should be operating on a list of Character (and indeed I was able to verify in a playground that Swift infers the correct type of [Character] here being passed into map).
String definitely can be instantiated from a Character.
let a: Character = "a"
String(a) // this works
And interestingly, this works if the characters are each in their own array.
"abcdefg".characters.map { [$0] }.map(String.init)
Or the equivalent:
let cx2: [[Character]] = [["a"], ["b"], ["c"], ["d"]]
cx2.map(String.init)
I know that I could do this:
"abcdefg".characters.map { String($0) }
But I am specifically trying to understand why "abcdefg".characters.map(String.init) does not work (IMO this syntax is also more readable and elegant)
Simplified repro:
String.init as Character -> String
// error: type of expression is ambiguous without more context
This is because String has two initializers that accept one Character:
init(_ c: Character)
init(stringInterpolationSegment expr: Character)
As far as I know, there is no way to disambiguate them when using the initializer as a value.
As for (1...100).map(String.init), String.init is resolved as Int -> String, although there are two initializers that accept one Int:
init(stringInterpolationSegment expr: Int)
init<T : _SignedIntegerType>(_ v: T)
A generic type is weaker than an explicit type, so the compiler chooses the stringInterpolationSegment: one in this case. You can confirm that by Command-clicking on .init.
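As an aside, later Swift versions (2.2 and up, via SE-0021) let you reference a function or initializer by its argument labels, which, combined with a contextual type, may be enough to disambiguate. This is a sketch under that assumption, not verified against the toolchain the question used:
let toString = String.init(stringInterpolationSegment:) as Character -> String
"abcdefg".characters.map(toString) // ["a", "b", "c", ...] as Strings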

Why do I have to explicitly unwrap my string in this case?

I have a string var oneString: String! and later on, in a method, when I want to concatenate a string to oneString, I have to do this:
oneString! += anyString
If I don't add the ! I get an error: 'String!' is not identical to 'CGFloat'
If I initialize my string with var oneString = "" I don't have this problem. Why? Why do I need to unwrap oneString while I explicitly said it would not be nil when I declared it?
Why do I need to unwrap oneString while I explicitly said it would not be nil when I declared it?
You’ve misunderstood what var oneString: String! means. It does not mean oneString will not be nil. If you declare a type as var oneString: String, then you are declaring a type that cannot be nil.
The type String! is an “implicitly-unwrapped optional”. That is, it’s an optional, much like String?, but one that pretends to be a non-optional sometimes. Mostly for the purposes of reading it – you don’t have to explicitly unwrap it to get the value out. The downside being, if it is ever nil (and it totally can be), your code will trap and abort.
But this pretending-to-not-be-optional only goes so far. You cannot pass a String! to a function as inout when that argument is not an optional. Hence your problem.
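Here is a sketch of that inout mismatch in modern Swift spelling (appendWorld is a hypothetical helper):
func appendWorld(to s: inout String) {
    s += " world"
}

var oneString: String! = "hello"
// appendWorld(to: &oneString)  // error: the implicit unwrap does not apply to inout
var plain = "hello"
appendWorld(to: &plain)         // fine: exact type match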
Anton’s answer is completely correct in why it won’t work, and his suggested operator overload will make your code compile. But it isn’t the right solution – you should instead avoid using implicitly-unwrapped optionals as they are spring-loaded deathtraps and only to be used in specific circumstances (the most common being with Cocoa UI controls). 999 times out of 1,000 you would be better off with a regular optional or non-optional
The reason is that Foo, Foo? and Foo! are different types in Swift.
There are certain operators pre-defined for you "out of the box" which allow a great deal of transparency between Foo and Foo!, but these types are still not the same.
When it comes to strings, the operator
func += (inout left: String!, right: String)
... is simply not defined.
If you declare it like:
func += (inout left: String!, right: String) {
    left = left + right
}
... then your code should compile the way you like it, that is:
oneString! += anyString