I wrote this function in a playground; it has a parameter of tuple type and its return type is also a tuple.
func tuple(value: (Int, Int)) -> (first: Int, second: Int) {
    let firstGrabTuple = value.0 + 5
    let secondGrabTuple = value.1 + 5
    return (firstGrabTuple, secondGrabTuple)
}
Then I assigned it to a constant called closure:
let closure = tuple
// (Playground sidebar) closure is shown as a function of type (Int, Int) -> (first: Int, second: Int)
Then, to double-check, I wrote another function that takes a closure of type ((Int, Int)) -> (Int, Int) as its parameter:
func anotherTuple(_ closure: ((Int, Int)) -> (Int, Int)) {
    closure((5, 5))
}
When I call anotherTuple:
anotherTuple { (x) -> (Int, Int) in
    let first = x.0 + 5
    let second = x.1 + 5
    return (first, second)
}
// prints .0 10, .1 10 as expected
So my question is: as mentioned above, when I assigned the first function tuple to a constant called closure, its type became (Int, Int) -> (first: Int, second: Int). But in the second function, to take a tuple-typed parameter I have to wrap the parameter type in double parentheses, as in (_ closure: ((Int, Int)) -> (Int, Int)).
But if I remove those double parentheses from anotherTuple's parameter, it will only expect two separate values, as a multi-argument function. Why is there no error when a function is used as a multi-argument function instead of one taking a tuple argument? I added an image for more detail.
And my second question is: why did
let closure = tuple
// closure became a function of type (Int, Int) -> (first: Int, second: Int)
not become of type ((Int, Int)) -> (first: Int, second: Int)?
Note: when I tried to pass that constant named closure as an argument to another function expecting a closure of type ((Int, Int)) -> (Int, Int), autocomplete showed its type as ((Int, Int)) -> (Int, Int) while I was typing.
You're correct about the confusion. I'm somewhat going to just restate your understanding, and say "yes, tuples in Swift are weird and slightly broken."
Swift does not distinguish between functions that take tuples vs multiple arguments in some ways, but does distinguish in other ways. (This is one of many reasons to avoid tuples at this point in Swift's life.)
First, these two functions have almost the same type in Swift (note that Void and () are identical):
func tuple(value: (Int, Int) ) {} // (Int, Int) -> Void
func two(value1: Int, value2: Int) {} // (Int, Int) -> Void
They look the same. But if you define a function that accepts that type, it can't accept the tuple-taking function:
func take(f: (Int, Int) -> Void) {}
take(f: tuple) // Fails
take(f: two) // ok
But if you define a function that takes ((Int, Int)) -> Void it can accept either function:
func take(f: ((Int, Int)) -> Void) {}
take(f: tuple) // ok
take(f: two) // ok
That suggests that (Int, Int) -> Void is a subtype of ((Int, Int)) -> Void.
But variables belie this:
var f: (Int, Int) -> Void
f = tuple // ok
f = two // ok
var g: ((Int, Int)) -> Void
g = tuple // ok
g = two // ok
g = f // ok
f = g // ok
And that suggests that (Int, Int) -> Void and ((Int, Int)) -> Void are the same type. But the take function indicates they're not.
Yes. All of that is true at the same time. Functions that accept tuples are not a coherent type in Swift (they do not occupy a clearly-delineated spot in the type hierarchy). That's just where we are in Swift today.
Old, incorrect answer.
This is just a quirk of Playgrounds, and you should feel free to open a radar about that.
If you check type(of: closure), it'll print the type you expect. Swift knows the correct type; Playgrounds just displays the wrong thing.
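A quick sketch of that check, reusing the tuple function from the question:
let closure = tuple
print(type(of: closure)) // expected to print ((Int, Int)) -> (first: Int, second: Int)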
Related
What exactly is the type of this constant (val)?
let val = { (a: Int, b: Int) -> Int in a + b }(1, 2)
Isn't it (Int, Int) -> Int?
It is just Int, because val holds only the final result: you are calling a closure right there that takes two Ints and returns their sum, which is going to be 3.
To determine the type of a variable in the future, you can just hold the Option key on the keyboard and hover over the variable name.
There's already a good answer here, but I'd like to provide some more details for the records.
The following is a closure expression of type (Int, Int) -> Int:
{ (a: Int, b: Int) -> Int in a + b }
You could as well have defined an equivalent named function:
func f(_ a: Int, _ b: Int) -> Int { a + b }
You could then have called the function with two parameters to get an Int value:
let val = f(1,2)
And you can do the same thing by replacing the function name with a closure expression:
let val = { (a: Int, b: Int) -> Int in a + b }(1, 2)
You could even combine the two approaches and display the types step by step:
let l = { (a: Int, b: Int) -> Int in a + b } // name the closure using a constant
print(type(of: l))   // (Int, Int) -> Int
let val = l(1, 2)    // use the closure
print(type(of: val)) // Int
print(val)           // 3
I have two overloaded functions, and I'm trying to call them from a generic caller by spelling out their signature. For example:
func sum(_ i: Int, _ s: Int) {
    print("sum \(i + s)")
}

func sum(_ i: String, _ s: Int) {
    var res = ""
    for _ in 1...s {
        res += i
    }
    print("Repeat \(res)")
}
func sumCaller<T>(_ f: T) {
    print(type(of: f)) // prints that f is a function of type (Int, Int) -> Void
}

sumCaller(sum as (Int, Int) -> Void)
Everything is fine here. But when I try to call f inside sumCaller like this:
func sumCaller<T>(_ f: T) {
    print(type(of: f)) // prints that f is a function of type (Int, Int) -> Void
    f(1, 3)
}
the compiler throws an error:
testing.playground:68:5: error: cannot call value of non-function type 'T'
    f(1,3)
But it's still the same function I passed as the sumCaller argument. Can someone please explain my mistake?
I'm not sure what your goal is, but the code below "does something". Hope it's what you're looking for:
func sum(_ i: Int, _ s: Int) {
    print("sum \(i + s)")
}

func sum(_ i: String, _ s: Int) {
    var res = ""
    for _ in 1...s {
        res += i
    }
    print("Repeat \(res)")
}
func sumCaller<A, B>(_ f: (A, B) -> Void, a: A, b: B) {
    print(type(of: f)) // prints the function type the call resolved to
    f(a, b)
}
sumCaller(sum, a: 2, b: 3)
sumCaller(sum, a: "Test", b: 3)
When you're using a generic function, T is a placeholder type name instead of an actual type name, as you already know. A placeholder could be of any type including String, Dictionary, Int, etc. Only when the function is called, the actual type is inserted in place of T. This process of replacing the generic type with an actual type is called specializing or resolving the generic. In order for the generic to be resolved, the compiler demands that there is no ambiguity as to what this T is, otherwise it will throw an error, which is what's happening here.
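As a quick illustration of ordinary specialization (a small sketch; describe is a hypothetical helper, not from the question):
func describe<T>(_ value: T) {
    print(type(of: value))
}

describe(42)   // T is resolved to Int, prints Int
describe("hi") // T is resolved to String, prints String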
When you look at this code:
func sumCaller<T>(_ f: T) {
    print(type(of: f)) // prints that f is a function of type (Int, Int) -> Void
    f(1, 3) // error: cannot call value of non-function type 'T'
}
there is no chance for this function to be called and have T substituted with an actual type, which means, T is still a placeholder and could be of any type. For example, f could be of String type, which means f(1,3) equates to String(1,3), which doesn't make sense. The compiler is letting you know that there is still ambiguity as to what T could be.
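One way out is to assert the expected shape at runtime with a conditional cast; a minimal sketch (castingSumCaller is a hypothetical name, reusing the sum overloads from the question):
func castingSumCaller<T>(_ f: T) {
    // T carries no structure, so assert the function shape before calling:
    if let g = f as? ((Int, Int) -> Void) {
        g(1, 3) // prints "sum 4"
    } else {
        print("f is not of type (Int, Int) -> Void")
    }
}

castingSumCaller(sum as (Int, Int) -> Void)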
I think it will be easiest to understand this if you just forget about T and the generic and just think about what the compiler knows vs. what is true at runtime.
Consider first just this code:
func myFunc(_ : Int, _ : Int) { print("it worked") }
let f : Any = myFunc
print(type(of: f)) // (Int, Int) -> ()
That code compiles and runs and, as my comment indicates, prints (Int, Int) -> (). But that is a runtime print. When the app runs, the runtime looks at what f really is and prints that out.
But long before the code runs, it must compile. The compiler knows only how things are typed externally. So as far as the compiler is concerned, f is an Any and that's all it is. You know that f is a function, but the compiler does not know that!
So, now we'll add a line to our code, and it won't compile any more:
func myFunc(_ : Int, _ : Int) { print("it worked") }
let f : Any = myFunc
print(type(of: f))
f(1,2) // cannot call value of non-function type 'Any'
Do you see? The compiler has no idea there is a function inside f. It sees only f, which is an Any. It is not until runtime that f will have a genuine value that can be examined.
We can solve the problem by forcing the compiler to believe that this will turn out to be a function at runtime:
func myFunc(_ : Int, _ : Int) { print("it worked") }
let f : Any = myFunc
print(type(of: f)) // (Int, Int) -> ()
(f as! (Int,Int) -> ())(1,2) // it worked
That code compiles and runs and the function is successfully called, as we know because it prints out "it worked".
What happened? Our use of as! (a forced cast) tells the compiler to throw away its own beliefs and just accept blindly our assertion that when the app runs, f will turn out to be a function. So the compiler lets us pretend that that is true, and takes away its objection and lets us run the code. We told the truth, so our code runs successfully too.
But if we had lied — if f were some other sort of thing — then the compiler would still not object, but our app would crash at the moment of the as! cast.
func myFunc(_ : Int, _ : Int) { print("it worked") }
let f : Any = 1
print(type(of: f)) // Int
(f as! (Int,Int) -> ())(1,2) // crash
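If crashing on a wrong guess is undesirable, a conditional cast fails gracefully instead; a small sketch along the same lines:
let f: Any = 1
if let g = f as? ((Int, Int) -> ()) {
    g(1, 2)
} else {
    print("f is not a function of type (Int, Int) -> ()") // this branch runs
}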
Let's say I have an array of integers, and I want to get the sum of all the even numbers and the sum of all the odd numbers. For example, for the array [1,2,3], sum of all the odd numbers is 4, and sum of all the even numbers is 2.
This is how I do it:
array.reduce((odd: 0, even: 0), { (result, int) in
    if int % 2 == 0 {
        return (result.odd, result.even + int)
    } else {
        return (result.odd + int, result.even)
    }
})
This worked fine on its own, but as soon as I try to deconstruct the tuple returned:
let (oddSum, evenSum) = a.reduce((odd: 0, even: 0), { (result, int) in
    if int % 2 == 0 {
        return (result.odd, result.even + int)
    } else {
        return (result.odd + int, result.even)
    }
})
It gives me the error:
Value of tuple type '(Int, Int)' has no member 'odd'
on the return statements.
Why does deconstructing the tuple cause the generic type to be inferred differently? The deconstruction part should just say what to do to the result. The method call should have been interpreted on its own, and then matched against the pattern (oddSum, evenSum).
To fix this I have to change the first parameter to (0, 0), which makes the stuff in the closure very unreadable. I have to refer to the odd sum as result.0 and even sum as result.1.
TL;DR
This behaviour is unfortunate, but is 'working as expected' due to a combination of:
1. The constraint system favouring un-labelled tuple types over labelled tuple types when it comes to type variable binding.
2. Multi-statement closures not participating in type inference.
Why does deconstructing the tuple cause the generic type to be inferred differently? The deconstruction part should just say what to do to the result. The method call should have been interpreted on its own, and then matched against the pattern (oddSum, evenSum).
The type checker does bidirectional type inference, meaning that the pattern used can influence how the assigned expression is type checked. For example, consider:
func magic<T>() -> T {
    fatalError()
}
let x: Int = magic() // T == Int
The type of the pattern is used to infer that T is Int.
So what happens with a tuple deconstruction pattern?
let (x, y) = magic() // error: Generic parameter 'T' could not be inferred
The type checker creates two type variables to represent each element of the tuple pattern. Such type variables are used internally within the constraint solver, and each must be bound to a Swift type before the constraint system can be considered solved. Within the constraint system, the pattern let (x, y) has the type ($T0, $T1), where $T{N} is a type variable.
The function returns the generic placeholder T, so the constraint system deduces that T is convertible to ($T0, $T1). However there's no further information for what $T0 and $T1 can be bound to, so the system fails.
Okay, let's give the system a way to bind types to those type variables by adding a parameter to the function.
func magic<T>(_ x: T) -> T {
    print(T.self)
    fatalError()
}
let labelledTuple: (x: Int, y: Int) = (x: 0, y: 0)
let (x, y) = magic(labelledTuple) // T == (Int, Int)
This now compiles, and we can see that the generic placeholder T is inferred to be (Int, Int). How did this happen?
magic is of type (T) -> T.
The argument is of type (x: Int, y: Int).
The result pattern is of type ($T0, $T1).
Here we can see that the constraint system has two options, it can either:
1. Bind T to the un-labelled tuple type ($T0, $T1), forcing the argument of type (x: Int, y: Int) to perform a tuple conversion that strips it of its labels.
2. Bind T to the labelled tuple type (x: Int, y: Int), forcing the returned value to perform a tuple conversion that strips it of its labels such that it can be converted to ($T0, $T1).
(this is glossing over the fact that generic placeholders are opened into fresh type variables, but that's an unnecessary detail here)
Without any rule to favour one option over the other, this is ambiguous. Luckily however the constraint system has a rule to prefer the un-labelled version of a tuple type when binding a type. Therefore the constraint system decides to bind T to ($T0, $T1), at which point both $T0 and $T1 can be bound to Int due to the fact that (x: Int, y: Int) needs to be convertible to ($T0, $T1).
Let's see what happens when we remove the tuple deconstruction pattern:
func magic<T>(_ x: T) -> T {
    print(T.self)
    fatalError()
}
let labelledTuple: (x: Int, y: Int) = (x: 0, y: 0)
let tuple = magic(labelledTuple) // T == (x: Int, y: Int)
T now gets bound to (x: Int, y: Int). Why? Because the pattern type is now simply of type $T0.
If T gets bound to $T0, then $T0 will be bound to the argument type (x: Int, y: Int).
If T gets bound to (x: Int, y: Int), then $T0 will also get bound to (x: Int, y: Int).
In both cases, the solution is the same, so no ambiguity. There's no possibility of T getting bound to an un-labelled tuple type simply due to the fact that no un-labelled tuple type is introduced into the system in the first place.
So, how does this apply to your example? Well, magic is just reduce without the additional closure argument:
public func reduce<Result>(
    _ initialResult: Result,
    _ nextPartialResult: (_ partialResult: Result, Element) throws -> Result
) rethrows -> Result
When you do:
let (oddSum, evenSum) = a.reduce((odd: 0, even: 0), { (result, int) in
    if int % 2 == 0 {
        return (result.odd, result.even + int)
    } else {
        return (result.odd + int, result.even)
    }
})
If we ignore the closure for now, we have the same choice of bindings for Result:
1. Bind Result to the un-labelled tuple type ($T0, $T1), forcing the argument of type (odd: Int, even: Int) to perform a tuple conversion that strips it of its labels.
2. Bind Result to the labelled tuple type (odd: Int, even: Int), forcing the returned value to perform a tuple conversion that strips it of its labels such that it can be converted to ($T0, $T1).
And because of the rule to favour the un-labelled form of tuple, the constraint solver chooses to bind Result to ($T0, $T1), which gets resolved to (Int, Int). Removing the tuple decomposition works because you no longer introduce the type ($T0, $T1) into the constraint system – meaning that Result can only be bound to (odd: Int, even: Int).
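A sketch of that workaround: bind the result to a single name, so no un-labelled tuple pattern enters the constraint system:
let sums = a.reduce((odd: 0, even: 0), { (result, int) in
    if int % 2 == 0 {
        return (result.odd, result.even + int)
    } else {
        return (result.odd + int, result.even)
    }
})
print(sums.odd, sums.even) // the labels survive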
Okay, but let's consider the closure again. We're clearly accessing the members .odd and .even on the tuple, so why can't the constraint system figure out that the binding of Result to (Int, Int) isn't viable? Well, this is due to the fact that multi-statement closures don't participate in type inference. This means that the closure body is solved independently of the call to reduce, so by the time the constraint system realises that the binding (Int, Int) is invalid, it's too late.
If you reduce the closure down to a single statement, this restriction is lifted and the constraint system can correctly discount (Int, Int) as a valid binding for Result:
let (oddSum, evenSum) = a.reduce((odd: 0, even: 0), { (result, int) in
    return int % 2 == 0 ? (result.odd, result.even + int)
                        : (result.odd + int, result.even)
})
Or if you change the pattern to use the corresponding tuple labels, as pointed out by Martin, the type of the pattern is now (odd: $T0, even: $T1), which avoids the introduction of the un-labelled form into the constraint system:
let (odd: oddSum, even: evenSum) = a.reduce((odd: 0, even: 0), { (result, int) in
    if int % 2 == 0 {
        return (result.odd, result.even + int)
    } else {
        return (result.odd + int, result.even)
    }
})
Another option, as pointed out by Alladinian, is to explicitly annotate the closure parameter type:
let (oddSum, evenSum) = a.reduce((odd: 0, even: 0), { (result: (odd: Int, even: Int), int) in
    if int % 2 == 0 {
        return (result.odd, result.even + int)
    } else {
        return (result.odd + int, result.even)
    }
})
Note however that unlike the previous two examples, this causes Result to be bound to (Int, Int) due to the pattern introducing the preferred type ($T0, $T1). What allows this example to compile is the fact that the compiler inserts a tuple conversion for the passed closure, which re-adds the tuple labels for its parameter.
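That conversion can be observed directly; a small sketch, assuming the same conversion applies to a plain assignment:
// A closure whose single parameter is a labelled tuple...
let labelled: ((odd: Int, even: Int)) -> Int = { $0.odd + $0.even }
// ...should convert to the un-labelled function type:
let unlabelled: ((Int, Int)) -> Int = labelled
print(unlabelled((1, 2))) // 3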
func rk4_func(
    y_array: [Double],
    f_array: [(([Double], Double) -> Double)],
    t_val: Double,
    h_val: Double
) -> [Double]
I don't understand how to use the argument f_array: [(([Double], Double) -> Double)]. How exactly do I pass that in when calling the function?
That's kind of tricky :)
A quick reminder about Swift's function types: a type of (Int) -> Float means a function that takes an Int and returns a Float. Okay, now back to your question:
As the argument f_array says (or, at least, tries to), it expects an array of functions. Inside this array, each function then accepts two arguments:
a doubles array: [Double]
a single double: Double
and returns a Double.
A quick example to get you going:
func f(_ a: [Double], _ b: Double) -> Double { ... }
func g(_ a: [Double], _ b: Double) -> Double { ... }
let f_array = [f, g]
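Assuming rk4_func is implemented with the signature shown in the question, a call site could then look like this (the numeric values are just placeholders):
// Call one of the functions in the array directly:
let v = f_array[0]([1.0, 2.0], 3.0)

// Or pass the whole array to the hypothetical rk4_func:
let result = rk4_func(y_array: [1.0, 2.0],
                      f_array: f_array,
                      t_val: 0.0,
                      h_val: 0.1)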
There's a fair share of Swift magic up there. Please let me know if you need any further clarification.
I want to declare a local variable whose type is a closure. How do I do that?
var myClosure: (myParamaterTypes) -> myReturnTypes
eg:
var myClosure: (Int, Int) -> Int // type is (Int, Int) -> Int
myClosure = { (integer01: Int, integer02: Int) -> Int in return integer01 + integer02 }
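As a side note, a typealias can make such declarations easier to read; a small sketch (IntCombiner is a hypothetical name):
typealias IntCombiner = (Int, Int) -> Int

var combine: IntCombiner = { $0 + $1 }
print(combine(2, 3)) // 5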