Swift generics: How to represent 'no type'? - swift

I am using Google's Promises library, and I would like to create a promise without any type (because I don't need one).
However, I am being forced to pick some type:
let promise = Promise<SomeType>.pending()
Is there a type I could pass in place of SomeType that would essentially mean 'no type', for when I need promises just for async flow and error catching but don't want to return a specific value from a function?
For example, some type whose only valid value is nil?
I have encountered this problem in multiple places; the only workaround I have found so far is to always provide a non-generic alternative, but that gets really tedious and leads to duplicate code.

Types are sets of values: Int is the set of all integers, String is the set of all sequences of characters, and so on.
If you consider the number of values in each set, there are special types with exactly 0 values and exactly 1 value, and they are useful in special cases like this.
Never is the type with no values in it. No instance of Never can be constructed, because there are no values it could be (just like an enum with no cases). That is useful for marking situations or code paths as 'can't happen': the compiler knows that a function that returns Never can never return, and that a function that takes Never can never be called. A function that returns Result<Int, Never> will never fail, but still lives in the world of functions returning Result. Because Never can't be constructed, though, it isn't what you want here; it would mean a Promise that can never be fulfilled.
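For a concrete picture, here is a quick sketch of Never in ordinary Swift (nothing library-specific):
func crash(_ message: String) -> Never {
    fatalError(message) // the compiler knows execution never continues past this call
}
// Result<Int, Never> can hold a success, but a failure value can never exist,
// so callers know this result cannot fail.
let parsed: Result<Int, Never> = .success(42)
if case .success(let value) = parsed {
    print(value) // 42
}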
Void is the type with exactly one value. It's normally spelled () on the left of a function definition and Void on the right. The value of a Void is not interesting; a function that returns Void is like what other languages call a subroutine or procedure. In this case it means a Promise that can be fulfilled, just not with any meaningful value, so it is useful only for its side effects (and for error propagation).
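So Promise<Void> is the usual answer to the question. A minimal sketch, assuming the pending()/fulfill(_:)/reject(_:) API of Google's Promises library (doCleanupWork is a made-up stand-in for your own throwing work):
import Foundation
import Promises
func doCleanupWork() throws { /* hypothetical work; only its side effects matter */ }
func performCleanup() -> Promise<Void> {
    let promise = Promise<Void>.pending()
    DispatchQueue.global().async {
        do {
            try doCleanupWork()
            promise.fulfill(()) // fulfilled with the single Void value, ()
        } catch {
            promise.reject(error) // errors still propagate to .catch as usual
        }
    }
    return promise
}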

Is it a correct solution, or just a workaround, if we create an empty class
class Default_Class: Codable {
}
and use it as Promise<Default_Class>.pending()?

Related

Passing variables by reference in Swift

I started learning C++ and now I am wondering if I can do some of the same things in Swift.
I never actually thought about what happens when we pass a variable as an argument to a function in Swift.
Let's use a variable of type String for the examples.
In C++ I can pass an argument to a function either by making a copy of it or by passing a reference/pointer:
void foo(string s) or void foo(string& s);
In the first case a copy of my original variable is created and foo receives that copy. In the second case I basically pass the address of the variable in memory, without creating a copy.
Now, in Swift I know that I can declare a function parameter as inout, which means I can modify the original object:
1) func foo(s: String)...
2) func testPassingByReference(s: inout String)...
I made an extension to String to print the address of the object:
extension String {
    func address() -> String {
        return String(format: "%p", self)
    }
}
The result was not what I expected to see.
var str = "Hello world"
print(str.address())
0x7fd6c9e04ef0
func testPassingByValue(s: String) {
    print("The address of s is: \(s.address())")
}
func testPassingByReference(s: inout String) {
    print("The address of s is: \(s.address())")
}
testPassingByValue(s: str)
0x7fd6c9e05270
testPassingByReference(s: &str)
0x7fd6c9e7caf0
I understand why the address is different when we pass an argument by value, but it's not what I expected to see when we pass it as an inout parameter.
The Apple developer website says that
In Swift, Array, String, and Dictionary are all value types.
So the question is: is there any way to avoid copying the objects that we pass to functions (I can have a pretty big array or dictionary), or does Swift not allow us to do such things?
Copying arrays and strings is cheap (almost free) as long as you don't modify them. Swift implements copy-on-write for these collections in the stdlib. This isn't a language-level feature; it's actually implemented explicitly for arrays and strings (and some other types). So you can have many copies of a large array that all share the same backing storage.
inout is not the same thing as "by reference." It is literally "in-out." The value is copied in at the start of the function, and then copied back to the original location at the end.
Swift's approach tends to be performant for common uses, but Swift doesn't make strong performance promises like C++ does. (That said, this allows Swift to be faster in some cases than C++ could be, because Swift isn't as restricted in its choice of data structures.) As a general rule, I find it very difficult to reason about the likely performance of arbitrary Swift code. It's easy to reason about the worst-case performance (just assume copies always happen), but it's hard to know for certain when a copy will be avoided.
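Here is a small sketch you can run in a playground to see copy-on-write in action (the buffer addresses printed are an implementation detail, shown purely for illustration):
let original = Array(repeating: 0, count: 1_000_000)
var copy = original // cheap: no elements are copied yet
original.withUnsafeBufferPointer { print($0.baseAddress!) } // e.g. 0x00000001100c8020
copy.withUnsafeBufferPointer { print($0.baseAddress!) }     // same address: storage is shared
copy[0] = 1 // the first mutation triggers the real copy
copy.withUnsafeBufferPointer { print($0.baseAddress!) }     // now a different address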
Even though inout parameters modify the variable that was passed as an argument to the function, they don't work exactly like pass-by-reference in other languages. The behaviour in Swift is called copy-in copy-out, or call by value result. When you use an inout parameter, its value is copied at the time of the function call and a local copy is modified inside the function. When the function returns, the modified copy is written back to the inout argument's original memory location.
You are printing the address inside the function, so you are actually printing the location of the copied value. Try printing after the function has returned and you will see the original location with the modified value.
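A small sketch you can paste into a playground to observe the write-back; the original variable's observer fires only when the function returns:
var value = 0 {
    didSet { print("didSet: the write-back to the original happened") }
}
func increment(_ number: inout Int) {
    number += 1
    print("inside the function, after mutating the local copy")
}
increment(&value)
// Prints the "inside the function" line first, then the didSet message.
print(value) // 1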
For more information, see the In-Out parameters part of the documentation.

Cannot retrieve associated object [duplicate]

The code below can be run in a Swift Playground:
import UIKit
func aaa(_ key: UnsafeRawPointer!, _ value: Any! = nil) {
    print(key)
}
func bbb(_ key: UnsafeRawPointer!) {
    print(key)
}
class A {
    var key = "aaa"
}
let a = A()
aaa(&a.key)
bbb(&a.key)
Here's the result printed on my mac:
0x00007fff5dce9248
0x00007fff5dce9220
Why do the results of the two prints differ? What's more interesting, when I change the function signature of bbb to match aaa's, the results of the two prints are the same. And if I use a global var instead of a.key in these two function calls, the results are also the same. Does anyone know why this strange behaviour happens?
Why do the results of the two prints differ?
Because for each function call, Swift creates a temporary variable initialised to the value returned by a.key's getter. Each function is called with a pointer to its own temporary variable, so the pointer values will likely not be the same, as they refer to different variables.
The reason why temporary variables are used here is because A is a non-final class, and can therefore have its getters and setters of key overridden by subclasses (which could well re-implement it as a computed property).
Therefore in an un-optimised build, the compiler cannot just pass the address of key directly to the function, but instead has to rely on calling the getter (although in an optimised build, this behaviour can change completely).
You'll note that if you mark key as final, you should now get consistent pointer values in both functions:
class A {
    final var key = "aaa"
}
var a = A()
aaa(&a.key) // 0x0000000100a0abe0
bbb(&a.key) // 0x0000000100a0abe0
Because now the address of key can just be directly passed to the functions, bypassing its getter entirely.
It's worth noting however that, in general, you should not rely on this behaviour. The values of the pointers you get within the functions are a pure implementation detail and are not guaranteed to be stable. The compiler is free to call the functions however it wishes, only promising you that the pointers you get will be valid for the duration of the call, and will have pointees initialised to the expected values (and if mutable, any changes you make to the pointees will be seen by the caller).
The only exception to this rule is the passing of pointers to global and static stored variables. Swift does guarantee that the pointer values you get will be stable and unique for that particular variable. From the Swift team's blog post on Interacting with C Pointers (emphasis mine):
However, interaction with C pointers is inherently unsafe compared to your other Swift code, so care must be taken. In particular:
These conversions cannot safely be used if the callee saves the pointer value for use after it returns. The pointer that results from these conversions is only guaranteed to be valid for the duration of a call. Even if you pass the same variable, array, or string as multiple pointer arguments, you could receive a different pointer each time. An exception to this is global or static stored variables. You can safely use the address of a global variable as a persistent unique pointer value, e.g.: as a KVO context parameter.
Therefore if you made key a static stored property of A, or just a global stored variable, you are guaranteed to get the same pointer value in both function calls.
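For example, with the aaa and bbb functions from the question (the printed address is just illustrative):
var globalKey = "aaa" // a global stored variable
aaa(&globalKey) // e.g. 0x0000000100002028
bbb(&globalKey) // guaranteed to print the same value as the line above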
Changing the function signature
When I change the function signature of bbb to match aaa's, the results of the two prints are the same
This appears to be an optimisation thing, as I can only reproduce it in -O builds and playgrounds. In an un-optimised build, the addition or removal of an extra parameter has no effect.
(Although it's worth noting that you should not test Swift behaviour in playgrounds as they are not real Swift environments, and can exhibit different runtime behaviour to code compiled with swiftc)
The cause of this behaviour is merely a coincidence – the second temporary variable is able to reside at the same address as the first (after the first is deallocated). When you add an extra parameter to aaa, a new variable will be allocated 'between' them to hold the value of the parameter to pass, preventing them from sharing the same address.
The same address isn't observable in un-optimised builds due to the intermediate load of a in order to call the getter for the value of a.key. As an optimisation, the compiler is able to inline the value of a.key to the call-site if it has a property initialiser with a constant expression, removing the need for this intermediate load.
Therefore if you give a.key a non-deterministic value, e.g. var key = arc4random(), then you should once again observe different pointer values, as the value of a.key can no longer be inlined.
But regardless of the cause, this is a perfect example of how the pointer values for variables (which are not global or static stored variables) are not to be relied on – as the value you get can completely change depending on factors such as optimisation level and parameter count.
inout & UnsafeMutable(Raw)Pointer
Regarding your comment:
But since withUnsafePointer(to:_:) always has the correct behavior I want (in fact it should, otherwise this function is of no use), and it also has an inout parameter. So I assume there are implementation difference between these functions with inout parameters.
The compiler treats an inout parameter in a slightly different way to an UnsafeRawPointer parameter. This is because you can mutate the value of an inout argument in the function call, but you cannot mutate the pointee of an UnsafeRawPointer.
In order to make any mutations to the value of the inout argument visible to the caller, the compiler generally has two options:
Make a temporary variable initialised to the value returned by the variable's getter. Call the function with a pointer to this variable, and once the function has returned, call the variable's setter with the (possibly mutated) value of the temporary variable.
If it's addressable, simply call the function with a direct pointer to the variable.
As said above, the compiler cannot use the second option for stored properties that aren't known to be final (but this can change with optimisation). However, always relying on the first option can be potentially expensive for large values, as they'll have to be copied. This is especially detrimental for value types with copy-on-write behaviour, as they depend on being unique in order to perform direct mutations to their underlying buffer – a temporary copy violates this.
To solve this problem, Swift implements a special accessor – called materializeForSet. This accessor allows the callee to either provide the caller with a direct pointer to the given variable if it's addressable, or otherwise will return a pointer to a temporary buffer containing a copy of the variable, which will need to be written back to the setter after it has been used.
The former is the behaviour you're seeing with inout – you're getting a direct pointer to a.key back from materializeForSet, therefore the pointer values you get in both function calls are the same.
However, materializeForSet is only used for function parameters that require write-back, which explains why it's not used for UnsafeRawPointer. If you make the function parameters of aaa and bbb take UnsafeMutable(Raw)Pointers (which do require write-back), you should observe the same pointer values again.
func aaa(_ key: UnsafeMutableRawPointer) {
    print(key)
}
func bbb(_ key: UnsafeMutableRawPointer) {
    print(key)
}
class A {
    var key = "aaa"
}
var a = A()
// will use materializeForSet to get a direct pointer to a.key
aaa(&a.key) // 0x0000000100b00580
bbb(&a.key) // 0x0000000100b00580
But again, as said above, this behaviour is not to be relied upon for variables that are not global or static.

I can't figure out how this switch statement is working?

func performMathAverage(mathFunc: String) -> ([Int]) -> Double {
    switch mathFunc {
    case "mean":
        return mean
    case "median":
        return median
    default:
        return mode
    }
}
I got this example from a Swift learning book, in the section on returning function types; this is just a part of the whole program, as I didn't want to copy and paste it all. My confusion is that the book says:
"Notice in performMathAverage , inside the switch cases, we return
either mean , median , or mode , and not mean() , median() , or mode()
. This is because we are not calling the methods, rather we are
returning a reference to it, much like function pointers in C. When
the function is actually called to get a value you add the parentheses
suffixed to the function name. Notice, too, that any of the average
functions could be called independently without the use of the
performMathAverage function. This is because mean , median , and mode
are called global functions ."
The main question is: "Why are we not calling the methods?"
And what do they mean by "we are returning a reference to it"?
What do they mean by "reference"? I'm just confused by this part.
You stated your main question as:
"Why are we not calling the methods?" and what do they mean we are returning a reference to it??
This is a little tricky to grasp at first, but what it's saying is that we don't want the result of the function, we want the function itself.
Sometimes things like this are easier to understand with a type alias.
Starting with ([Int]) -> Double, what we're saying there is "a function that takes an array of Ints and returns a Double".
Let's make a type alias for clarity:
typealias AverageFunction = ([Int]) -> Double
Now our function (from your example) looks like this:
func performMathAverage(mathFunc: String) -> AverageFunction {
Although the naming here is pretty confusing, since we're not performing anything; instead, let's call it this:
func getAverageFunctionWithIdentifier(_ identifier: String) -> AverageFunction {
Now it's very clear that this method is functioning like a factory that returns us an average function based on the identifier we provide. Now let's look at the implementation:
func getAverageFunctionWithIdentifier(_ identifier: String) -> AverageFunction {
    switch identifier {
    case "mean":
        return mean
    case "median":
        return median
    default:
        return mode
    }
}
So now, we're running a switch on the identifier to find the corresponding function. Again, we're not calling the function because we don't want the value, we want the function itself. Let's look at how we would call this:
let averageFunction = getAverageFunctionWithIdentifier("mean")
Now, averageFunction is a reference to the mean function, which means we can use it to get the mean of an array of integers:
let mean = averageFunction([1,2,3,4,5])
But what if we wanted to use a different type of average, say median? We wouldn't have to change anything except for the identifier:
let averageFunction = getAverageFunctionWithIdentifier("median")
let median = averageFunction([1,2,3,4,5])
This example is pretty contrived, but the benefit is that by abstracting a function out to its type (in this case ([Int]) -> Double), we can use any function that conforms to that type interchangeably.
This is functional programming!
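To make it runnable end to end, here is a self-contained sketch that pairs the factory above with plausible mean/median/mode implementations; the book's versions aren't shown in the question, so these bodies are assumptions (all three expect a non-empty array):
func mean(_ values: [Int]) -> Double {
    return Double(values.reduce(0, +)) / Double(values.count)
}
func median(_ values: [Int]) -> Double {
    let sorted = values.sorted()
    let mid = sorted.count / 2
    if sorted.count % 2 == 0 {
        return Double(sorted[mid - 1] + sorted[mid]) / 2
    } else {
        return Double(sorted[mid])
    }
}
func mode(_ values: [Int]) -> Double {
    // Count occurrences and return the most frequent value.
    let counts = Dictionary(grouping: values, by: { $0 }).mapValues { $0.count }
    return Double(counts.max(by: { $0.value < $1.value })!.key)
}
let averageFunction = getAverageFunctionWithIdentifier("mean")
print(averageFunction([1, 2, 3, 4, 5])) // 3.0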
This has to do with the functional programming aspects of Swift. Here, functions are treated as first-class citizens, meaning you can treat them like variables.
Why are we not calling the methods?
You are not calling the methods because you have no argument to apply yet. The point of the function is to determine which function you would like to use. Of course, the name of the function is terrible and does not accurately represent what it does. It should be something more like func determineMathFuncToUse; then you could use it like
var myFunc = determineMathFuncToUse("median")
// Now, you would be able to use myFunc just like you would use median
// e.g. myFunc(some_array) == median(some_array)
This is pretty easy to understand. In Swift you can store references to functions (the closest you can get in Objective-C is a reference to a block).
func performMathAverage(mathFunc: String) -> ([Int]) -> Double
This is the function whose return type is:
([Int]) -> Double
As you can see, the return type of this function is itself a function, one which accepts an array of Int and returns a Double.
And in the code you can see that it returns one of three functions: mean, mode, or median. Each of these functions accepts an array of Int and returns a Double.
Because of that, this code:
let meanFunc = performMathAverage(mathFunc: "mean")
let meanValue = meanFunc(someIntArray)
is identical to:
let meanValue = mean(someIntArray)
I hope this helps.
The reason why the functions are NOT executed in the code is that this example illustrates how you can STORE a reference to a function.
It might be difficult to understand why you would want to do that in this particular case, but, hey, printing "Hello world" also seems meaningless :)
You are referring to an example in a tutorial, so it is not strange that they are oversimplifying things. However, believe me, in the real world there are many cases in which storing references to functions is very, very useful!
One obvious example is when you want to store a reference to a completion handler that you want to execute at the end of some lengthy operation, and which can differ depending on the context from which you initiated that operation.
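A rough sketch of that idea; the type and names here are invented purely for illustration:
import Foundation
final class LengthyOperation {
    // The stored function reference, called once the work finishes.
    private let completion: (Result<String, Error>) -> Void
    init(completion: @escaping (Result<String, Error>) -> Void) {
        self.completion = completion
    }
    func start() {
        DispatchQueue.global().async {
            // ... lengthy work would happen here ...
            let output = "done"
            DispatchQueue.main.async {
                self.completion(.success(output)) // invoke the stored handler
            }
        }
    }
}
// Which handler runs depends entirely on the context that created the operation.
LengthyOperation { result in
    print(result)
}.start()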
To answer your question, you are effectively returning the function itself, and not the result of calling that function. In this case, it lets you choose a function (using the switch statement) and evaluate it later. A helpful way to think about it is that functions are also a type of variable, and you can pass them around as well as evaluate them.
One stylistic note: unlike C, Swift switch cases do not fall through by default, so you don't need to end each case with a break. Here the return ends both the case and the function anyway. A break is only needed when you want a case to do nothing at all, and if you ever do want C-style fall-through you have to write fallthrough explicitly.
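For instance, this compiles and prints only "one"; each case exits the switch on its own:
let n = 1
switch n {
case 1:
    print("one") // no break needed; control leaves the switch here
case 2:
    print("two")
default:
    break // break is only needed to make an intentionally empty case valid
}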
Why are we not calling the methods?
Presumably, the method will be called later. The purpose of the switch statement is to return a function that can be used later.
What do they mean we are returning a reference to it? ... What do they mean by reference?
The "reference" language is a bit confusing - functions are reference types, but that isn't super important to what is going on. You can think of it as just returning a function.
The bottom line is that functions in Swift can be used like any other type: they can be stored in variables or constants, they can be passed into a function as parameters, and they can be returned from a function.
In this case, you have a function that is designed to return a function. If you want to obtain the mean, you pass the string "mean" and the function will return a new function that will obtain the mean when you call it.

Swift: Optional tuple type in Assert statement

I'm trying to implement a Priority Queue in Swift, and it's still in an initial stage. I would like to write tests for the data structure first.
Currently I have this method, extractMin, that returns the first element in the priority queue. It is implemented as follows (T is a generic value type).
// Remove and return the element with minimum priority.
// If the pq is empty, return nil.
func extractMin() -> (T, Double)? {
return nil // Implementation goes here
}
I would like to write tests for this function. The first thing I will check is that it returns nil when the pq is empty. And I want to have something like this:
XCTAssertNil(priorityQueue.extractMin(), "A nil value should be returned when the priority queue is empty");
However, this prompts an error that reads "(String, Double) is not identical to AnyObject".
Is there any way to fix this? How should I go about writing tests for this kind of optional tuple type?
Maybe you could consider returning a tuple of two optional values and checking both? Or an optional class that has these two members.
This link also discusses the possibility that a missing "import Foundation" is the cause: http://www.scottlogic.com/blog/2014/09/24/swift-anyobject.html
Also see: How can one use XCTAssertNil with optional structs?
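One workaround along those lines, sketched under the assumption that the type is called PriorityQueue and has an empty initialiser: compare the optional tuple against nil yourself and assert on the Bool, which sidesteps the AnyObject requirement of that era's XCTAssertNil (recent Swift/XCTest versions accept optional expressions directly, so the original assert may simply compile for you today).
import XCTest
final class PriorityQueueTests: XCTestCase {
    func testExtractMinReturnsNilWhenEmpty() {
        let priorityQueue = PriorityQueue<String>() // assumed initialiser
        XCTAssertTrue(priorityQueue.extractMin() == nil,
                      "A nil value should be returned when the priority queue is empty")
    }
}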