When filtering an array literal in Swift, why does the result contain optionals?

A contrived example to be sure, but why is the result an array of optionals?
let r = [1,2,3].filter { sourceElement in
    return !["1", "2"].contains { removeElement in
        sourceElement == Int(removeElement)
    }
}
print(r.dynamicType)
Either type casting the source array or assigning it to a variable returns an array of Ints.
let seq = [1,2,3]
let r2 = seq.filter { sourceElement in
    return !["1", "2"].contains { removeElement in
        sourceElement == Int(removeElement)
    }
}
print(r2.dynamicType) // "Array<Int>\n"
Shouldn't both results be of the same type?

I don’t think it’s necessarily a bug though it is confusing. It’s a question of where the promotion to optional happens to make the whole statement compile. A shorter repro that has the same behavior would be:
let i: Int? = 1
// x will be [Int?]
let x = [1,2,3].filter { $0 == i }
Bear in mind that when you write nonOptional == someOptional, the type of the lhs must be implicitly promoted to optional for it to work, because the == you are using is this one, in which both sides must be optional:
public func ==<T>(lhs: T?, rhs: T?) -> Bool
The compiler needs to promote something in this entire statement to be an optional, and what it chose was the integer literals inside [1,2,3]. You were instead expecting the promotion to happen at the point of the ==, so you could compare the non-optional sourceElement with the optional result of Int(_:), but this isn't guaranteed (it's not clear to what extent the ordering/precedence of these promotions is specified versus just being the way the compiler happens to be coded…)
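To see the promotion in isolation, here is a minimal sketch (the variable names are just for illustration):
let lhs = 1             // Int
let rhs = Int("1")      // Int?
let equal = lhs == rhs  // compiles only because lhs is implicitly promoted to Int?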
The reason this doesn't happen in the two-line version is that when you write let seq = [1,2,3] on its own line, the type of seq is decided there. Then on the next line, the compiler doesn't have as much latitude, so it must promote sourceElement to Int? so it can be compared with Int(removeElement) using ==.
Another way of making the code perform the conversion at the point you expect would be:
let r = [1,2,3].filter { sourceElement in
    return !["1", "2"].contains { (removeElement: String) -> Bool in
        // force the optional upgrade to happen here rather than
        // on the [1,2,3] literal...
        Optional(sourceElement) == Int(removeElement)
    }
}
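With the promotion forced inside the closure, the [1,2,3] literal should again be inferred as [Int]; checking as in the question (a hedged note, using the same Swift 2-era dynamicType):
print(r.dynamicType) // "Array<Int>\n"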

Related

compactMap behaves differently when storing in an optional variable

Consider the following array.
let marks = ["86", "45", "thirty six", "76"]
I've explained my doubts in the following two cases.
Case#1
// Compact map without optionals
let compactMapped: [Int] = marks.compactMap { Int($0) }
print("\(compactMapped)")
Result - [86, 45, 76]
Case#2
// Compact map with optionals
let compactMappedOptional: [Int?] = marks.compactMap { Int($0) }
print("\(compactMappedOptional)")
Result - [Optional(86), Optional(45), nil, Optional(76)]
Why is there "nil" in the result of Case#2? Can anyone explain why it is not [Optional(86), Optional(45), Optional(76)] in Case#2?
I submitted this behavior as a bug at bugs.swift.org, and it came back as "works as intended." I had to give the response some thought in order to find a way to explain it to you; I think this re-expresses it pretty accurately and clearly. Here we go!
To see what's going on here, let's write something like compactMap ourselves. Pretend that compactMap does three things:
Maps the original array through the given transform, which is expected to produce Optionals; in this particular example, it produces Int? elements.
Filters out nils.
Force unwraps the Optionals (safe because there are now no nils).
So here's the "normal" behavior, decomposed into this way of understanding it:
let marks = ["86", "45", "thirty six", "76"]
let result = marks.map { element -> Int? in
    return Int(element)
}.filter { element in
    return element != nil
}.map { element in
    return element!
}
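Running this decomposition over the same marks array gives the normal result:
print(result) // [86, 45, 76]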
Okay, but in your example, the cast to [Int?] tells compactMap to output Int?, which means that its first map must produce Int??.
let result3 = marks.map { element -> Int?? in
    return Int(element) // wrapped in extra Optional!
}.filter { element in
    return element != nil
}.map { element in
    return element!
}
So the first map produces double-wrapped Optionals, namely Optional(Optional(86)), Optional(Optional(45)), Optional(nil), Optional(Optional(76)).
None of those is nil, so they all pass through the filter, and then they are all unwrapped once to give the result you're printing out.
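Here is a minimal sketch of that double wrapping on a single element (variable names are purely illustrative):
let inner: Int? = Int("thirty six")     // nil
let outer: Int?? = inner                // Optional(nil): the nil is wrapped once more
print(outer == nil)                     // false, so it would survive the nil filter
print(String(describing: outer!))       // "nil" after a single unwrap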
The Swift expert who responded to my report admitted that there is something counterintuitive about this, but it's the price we pay for the automatic behavior where assigning into an Optional performs automatic wrapping. In other words, you can say
let i : Int? = 1
because 1 is wrapped in an Optional for you on the way into the assignment. Your [Int?] cast asks for the very same sort of behavior.
The workaround is to specify the transform's output type yourself, explicitly:
let result3 = marks.compactMap {element -> Int? in Int(element) }
That prevents the compiler from drawing its own conclusions about what the output type of the map function should be. Problem solved.
[You might also want to look at the WWDC 2020 video on type inference in Swift.]

Binary operator '===' cannot be applied to two 'String' operands

Why can't === be used with Strings in Swift? I am unable to compile the following:
let string1 = "Bob"
let string2 = "Fred"
if string1 === string2 {
...
}
and get the following error (on the if line):
Binary operator '===' cannot be applied to two 'String' operands
What I want to be able to do in my unit tests is, having performed a copyWithZone:, verify that two objects are indeed a different object with different pointers even if their values are the same. The following code doesn't work...
XCTAssertFalse(object1.someString === object2.someString)
If anyone knows of an alternative way please advise.
string1 and string2 are not NSString, but String. Since they are value objects, not reference objects, there is no reference that could be compared with ===.
Swift's === operator, by default, is only defined for classes.
Swift's String type is not a class but a struct. It does not conform to AnyObject and therefore cannot be compared by reference.
You could of course implement an === operator for String in Swift, but I'm not sure how it would be any different from the implementation of == for Swift's String type.
func ===(lhs: String, rhs: String) -> Bool {
    return lhs == rhs
}
Unless, of course, you really wanted to compare the references, in which case I suppose you could do something like this:
func ===(lhs: String, rhs: String) -> Bool {
    return unsafeAddressOf(lhs) == unsafeAddressOf(rhs)
}
However, for the sake of tests, rather than using the == or === operators, you should use the appropriate assertions:
XCTAssertEqual(foo, bar)
XCTAssertNotEqual(foo, bar)
The === operator is the identity operator. It checks if two variables or constants refer to the same instance of a class. Strings are not classes (they are structs) so the === operator does not apply to them.
If you want to check if two strings are the same, use the equality operator == instead.
Read all about the identity operator in the Swift documentation.
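A minimal sketch of the difference, using a made-up Box class purely for illustration:
final class Box { var value = "Bob" }
let a = Box()
let b = a
let c = Box()
print(a === b)              // true: a and b refer to the same instance
print(a === c)              // false: different instances
print(a.value == c.value)   // true: equal String values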
You can just check two objects for identity directly, instead of checking a property of type String.
XCTAssertFalse(object1 === object2)
Swift Strings are value types, not reference types, so there's no need for that; a copy will always be a different object.
You should just compare by value with ==.
If you try really hard, you can force things to happen, but I'm not sure what that buys you.
class MyClass: NSObject, NSCopying {
    var someString: NSString = ""
    required override init() {
        super.init()
    }
    func copyWithZone(zone: NSZone) -> AnyObject {
        let copy = self.dynamicType.init()
        copy.someString = someString.copy() as? NSString ?? ""
        return copy
    }
}
let object1 = MyClass()
object1.someString = NSString(format: "%d", arc4random())
let object2 = object1.copy()
if object1.someString === object2.someString {
    print("identical")
} else {
    print("different")
}
prints identical; the system is really good at conserving strings.

Can a condition be used to determine the type of a generic?

I will first explain what I'm trying to do and how I got to where I got stuck before getting to the question.
As a learning exercise for myself, I took some problems that I had already solved in Objective-C to see how I can solve them differently with Swift. The specific case that I got stuck on is a small piece that captures a value before and after it changes and interpolates between the two to create keyframes for an animation.
For this I had an object Capture with properties for the object, the key path, and two id properties for the values before and after. Later, when interpolating the captured values, I made sure that they could be interpolated by wrapping each of them in a Value class that used a class cluster to return an appropriate class depending on the type of value it wrapped, or nil for types that weren't supported.
This works, and I am able to make it work in Swift as well following the same pattern, but it doesn't feel Swift-like.
What worked
Instead of wrapping the captured values as a way of enabling interpolation, I created a Mixable protocol that the types could conform to and used a protocol extension for when the type supported the necessary basic arithmetic:
protocol SimpleArithmeticType {
    func +(lhs: Self, right: Self) -> Self
    func *(lhs: Self, amount: Double) -> Self
}
protocol Mixable {
    func mix(with other: Self, by amount: Double) -> Self
}
extension Mixable where Self: SimpleArithmeticType {
    func mix(with other: Self, by amount: Double) -> Self {
        return self * (1.0 - amount) + other * amount
    }
}
This part worked really well and enforced homogeneous mixing (that a type could only be mixed with its own type), which wasn't enforced in the Objective-C implementation.
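For example, a type such as Double can opt in with empty conformances, since the required operators already exist (a hedged sketch using the same Swift 2-era syntax as the question):
extension Double: SimpleArithmeticType {}
extension Double: Mixable {}
let mixed = 0.0.mix(with: 10.0, by: 0.25) // 2.5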
Where I got stuck
The next logical step, and this is where I got stuck, seemed to be to make each Capture instance (now a struct) hold two variables of the same mixable type instead of two AnyObject. I also changed the initializer argument from being an object and a key path to being a closure that returns an object, () -> T.
struct Capture<T: Mixable> {
    typealias Evaluation = () -> T
    let eval: Evaluation
    let before: T
    var after: T {
        return eval()
    }
    init(eval: Evaluation) {
        self.eval = eval
        self.before = eval()
    }
}
This works when the type can be inferred, for example:
let captureInt = Capture {
    return 3.0
}
// > Capture<Double>
but not with key-value coding, which returns AnyObject:
let captureAnyObject = Capture {
    return myObject.valueForKeyPath("opacity")!
}
error: cannot invoke initializer for type 'Capture' with an argument list of type '(() -> _)'
AnyObject does not conform to the Mixable protocol, so I can understand why this doesn't work. But I can check what type the object really is, and since I'm only covering a handful of mixable types, I thought I could cover all the cases and return the correct type of Capture. To see if this could even work, I made an even simpler example.
A simpler example
struct Foo<T> {
    let x: T
    init(eval: () -> T) {
        x = eval()
    }
}
which works when type inference is guaranteed:
let fooInt = Foo {
    return 3
}
// > Foo<Int>
let fooDouble = Foo {
    return 3.0
}
// > Foo<Double>
But not when the closure can return different types
let condition = true
let foo = Foo {
    if condition {
        return 3
    } else {
        return 3.0
    }
}
error: cannot invoke initializer for type 'Foo' with an argument list of type '(() -> _)'
I'm not even able to define such a closure on its own.
let condition = true // as simple as it could be
let evaluation = {
    if condition {
        return 3
    } else {
        return 3.0
    }
}
error: unable to infer closure type in the current context
My Question
Is this something that can be done at all? Can a condition be used to determine the type of a generic? Or is there another way to hold two variables of the same type, where the type was decided based on a condition?
Edit
What I really want is to:
capture the values before and after a change and save the pair (old + new) for later (a heterogeneous collection of homogeneous pairs).
go through all the collected values and get rid of the ones that can't be interpolated (unless this step could be integrated with the collection step)
interpolate each homogeneous pair individually (mixing old + new).
But it seems like this direction is a dead end when it comes to solving that problem. I'll have to take a couple of steps back and try a different approach (and probably ask a different question if I get stuck again).
As discussed on Twitter, the type must be known at compile time. Nevertheless, for the simple example at the end of the question you could just explicitly type
let evaluation: () -> Double = { ... }
and it would work.
So in the case of Capture and valueForKeyPath:, IMHO you should cast (either safely or with a forced cast) the value to the Mixable type you expect the value to be, and it should work fine. After all, I'm not sure valueForKeyPath: is supposed to return different types depending on a condition.
What is the exact case where you would like to return 2 totally different types (that can't be implicitly cast, as in the simple case of Int and Double above) in the same evaluation closure?
In my full example I also have cases for CGPoint, CGSize, CGRect, and CATransform3D.
The limitations are just as you have stated, because of Swift's strict typing. All types must be definitely known at compile time, and each thing can be of only one type - even a generic (it is resolved by the way it is called at compile time). Thus, the only thing you can do is turn your type into an umbrella type that is much more like Objective-C itself:
let condition = true
let evaluation = {
    () -> NSObject in // give the closure an explicit umbrella return type
    if condition {
        return 3
    } else {
        return NSValue(CGPoint: CGPointMake(0, 1))
    }
}
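To get concrete values back out, the caller then has to inspect the runtime type and cast, roughly like this (a hedged sketch assuming the same iOS/UIKit context as NSValue(CGPoint:); the cases mirror the two returns above):
let value = evaluation()
switch value {
case let number as NSNumber:            // checked first, since NSNumber is a subclass of NSValue
    print("number: \(number)")
case let wrapped as NSValue:
    print("point: \(wrapped.CGPointValue())")
default:
    print("unsupported type")
}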

How to handle initial nil value for reduce functions

I would like to learn and use more functional programming in Swift. So, I've been trying various things in playground. I don't understand Reduce, though. The basic textbook examples work, but I can't get my head around this problem.
I have an array of strings called "toDoItems". I would like to get the longest string in this array. What is the best practice for handling the initial nil value in such cases? I think this probably happens often. I thought of writing a custom function and using it.
func optionalMax(maxSofar: Int?, newElement: Int) -> Int {
    if let definiteMaxSofar = maxSofar {
        return max(definiteMaxSofar, newElement)
    }
    return newElement
}
// Just testing - nums is an array of Ints. Works.
var maxValueOfInts = nums.reduce(0) { optionalMax($0, $1) }
// ERROR: cannot invoke 'reduce' with an argument list of type '(nil, (_,_)->_)'
var longestOfStrings = toDoItems.reduce(nil) { optionalMax(count($0), count($1)) }
It might just be that Swift does not automatically infer the type of your initial value. Try making it clear by explicitly declaring it:
var longestOfStrings = toDoItems.reduce(nil as Int?) { optionalMax($0, count($1)) }
By the way, notice that I do not call count on $0 (your accumulator), since it is not a String but an optional Int (Int?).
Generally, to avoid confusion when reading the code later, I explicitly label the accumulator as a and the element coming in from the sequence as x:
var longestOfStrings = toDoItems.reduce(nil as Int?) { a, x in optionalMax(a, count(x)) }
This reads more clearly than $0 and $1 when both the accumulator and the individual element are used.
Hope this helps
Initialise it with an empty string "" rather than nil. Or you could even initialise it with the first element of the array, but an empty string seems better.
Second go at this after writing some wrong code: this will return the longest string, if you are happy with an empty string being returned for an empty array:
toDoItems.reduce("") { count($0) > count($1) ? $0 : $1 }
Or if you want nil, use
toDoItems.reduce(nil as String?) { $0 == nil || count($0!) < count($1) ? $1 : $0 }
The problem is that the compiler cannot infer the types you are using for your seed and accumulator closure if you seed with nil; you also need to get the optional type correct, and to check for the nil seed before force-unwrapping the optional string in $0.
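If you want the longest string itself rather than its length, a hedged variant (same Swift 1.2-era count(_:) as above) that checks the nil seed before unwrapping could look like this:
let longest = toDoItems.reduce(nil as String?) { a, x in
    if let a = a where count(a) >= count(x) { return a }
    return x
}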

swift, optional unwrapping, reversing if condition

Let's say I have a function which returns an optional: nil on error, and a value on success:
func foo() -> Bar? { ... }
I can use following code to work with this function:
let fooResultOpt = foo()
if let fooResult = fooResultOpt {
    // continue correct operations here
} else {
    // handle error
}
However, there are a few problems with this approach for any non-trivial code:
Error handling is performed at the end, and it's easy to miss something. It's much better when the error handling code follows the function call.
The correct-operations code is indented by one level. If we have another function to call, we have to indent one more level.
With C, one could usually write something like this:
Bar *fooResult = foo();
if (fooResult == NULL) {
    // handle error and return
}
// continue correct operations here
I found two ways to achieve similar code style with Swift, but I don't like either.
let fooResultOpt = foo()
if fooResultOpt == nil {
    // handle error and return
}
// use fooResultOpt! from here
let fooResult = fooResultOpt! // or define another variable
If I write "!" everywhere, it just looks bad to my taste. I could introduce another variable, but that doesn't look good either. Ideally I would like to see the following:
if !let fooResult = foo() {
    // handle error and return
}
// fooResult has type Bar and can be used at the top level
Did I miss something in the specification, or is there another way to write good-looking Swift code?
Your assumptions are correct—there isn't a "negated if-let" syntax in Swift.
I suspect one reason for that might be grammar integrity. Throughout Swift (and commonly in other C-inspired languages), if you have a statement that can bind local symbols (i.e. name new variables and give them values) and that can have a block body (e.g. if, while, for), those bindings are scoped to said block. Letting a block statement bind symbols to its enclosing scope instead would be inconsistent.
It's still a reasonable thing to think about, though — I'd recommend filing a bug and seeing what Apple does about it.
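As a hedged aside: Swift 2 later added guard let, which provides exactly this early-exit shape (it was not available when this question was asked):
guard let fooResult = foo() else {
    // handle error and return
    return
}
// fooResult is non-optional from here on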
This is what pattern matching is all about, and is the tool meant for this job:
let x: String? = "Yes"
switch x {
case .Some(let value):
    println("I have a value: \(value)")
case .None:
    println("I'm empty")
}
The if-let form is just a convenience for when you don't need both legs.
If what you are writing is a set of functions performing the same sequence of transformation, such as when processing a result returned by a REST call (check for response not nil, check status, check for app/server error, parse response, etc.), what I would do is create a pipeline that at each steps transforms the input data, and at the end returns either nil or a transformed result of a certain type.
I chose the custom >>> operator, which visually indicates the data flow, but of course feel free to choose your own:
infix operator >>> { associativity left }
func >>> <T, V> (params: T?, next: T -> V?) -> V? {
    if let params = params {
        return next(params)
    }
    return nil
}
The operator is a function that receives as input a value of a certain type, and a closure that transforms the value into a value of another type. If the value is not nil, the function invokes the closure, passing the value, and returns its return value. If the value is nil, then the operator returns nil.
An example is probably needed, so let's suppose I have an array of integers, and I want to perform the following operations in sequence:
sum all elements of the array
raise the sum to the power of 2 (i.e. square it)
divide by 5 and return the integer part and the remainder
sum the above 2 numbers together
These are the 4 functions:
func sumArray(array: [Int]?) -> Int? {
    if let array = array {
        return array.reduce(0, combine: +)
    }
    return nil
}
func powerOf2(num: Int?) -> Int? {
    if let num = num {
        return num * num
    }
    return nil
}
func module5(num: Int?) -> (Int, Int)? {
    if let num = num {
        return (num / 5, num % 5)
    }
    return nil
}
func sum(params: (num1: Int, num2: Int)?) -> Int? {
    if let params = params {
        return params.num1 + params.num2
    }
    return nil
}
and this is how I would use them:
let res: Int? = [1, 2, 3] >>> sumArray >>> powerOf2 >>> module5 >>> sum
The result of this expression is either nil or a value of the type as defined in the last function of the pipeline, which in the above example is an Int.
If you need to do better error handling, you can define an enum like this:
enum Result<T> {
    case Value(T)
    case Error(MyErrorType)
}
and replace all optionals in the above functions with Result<T>, returning .Error(error) instead of nil.
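A hedged sketch of how the operator might be adapted to that enum (MyErrorType is a stand-in for whatever error type you use):
func >>> <T, V> (params: Result<T>, next: T -> Result<V>) -> Result<V> {
    switch params {
    case .Value(let value):
        return next(value)
    case .Error(let error):
        return .Error(error)
    }
}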
I've found a way that looks better than the alternatives, but it uses language features in an unrecommended way.
Example using code from the question:
let fooResult: Bar! = foo()
if fooResult == nil {
    // handle error and return
}
// continue correct operations here
fooResult can then be used as a normal variable; there's no need for "?" or "!" suffixes.
Apple documentation says:
Implicitly unwrapped optionals are useful when an optional’s value is confirmed to exist immediately after the optional is first defined and can definitely be assumed to exist at every point thereafter. The primary use of implicitly unwrapped optionals in Swift is during class initialization, as described in Unowned References and Implicitly Unwrapped Optional Properties.
How about the following:
func foo(i: Int) -> Int? {
    switch i {
    case 0: return 0
    case 1: return 1
    default: return nil
    }
}
var error: Int {
    println("Error")
    return 99
}
for i in 0...2 {
    var bob: Int = foo(i) ?? error
    println("\(i) produces \(bob)")
}
Results in the following output:
0 produces 0
1 produces 1
Error
2 produces 99