Can a Swift computed property getter mutate a structure?

I have a struct wrapping a var data: [T] that also supplies some statistics on the internal array. One statistic is the max value, which can be an expensive operation because it requires searching every element to determine the max, so I'd like to cache the max value and only recalculate it when I need to:
// `maxValue: T?` is the struct's cached maximum (a stored property not shown here).
private mutating func getMax() -> T? {
    if let m = maxValue {
        return m
    } else if data.count > 0 {
        maxValue = data.maxElement()
        return maxValue
    } else {
        return nil
    }
}
That seems to work fine as a method, but I can't figure out how to do the same thing as a computed property.
var max: T? { return getMax() }
leads to a complaint that the accessor needs to be marked "mutating" because getMax() is mutating (actually I'd put the getMax code into the property accessor, but it's easier to not rewrite the code here).
Xcode suggests I rewrite the code thusly:
var max:T? mutating {return getMax()}
which then flags another problem, and Xcode suggests putting a semicolon before mutating, which leads to a suggestion to put another semicolon after mutating, and then yet another, until it's clear the compiler isn't even trying to help but just has a semicolon fetish.
Is there a way to write a computed property that permits caching values or am I stuck writing this as a method?

The correct syntax, despite the compiler's suggestions, would be:
var max: T? {
    mutating get { return getMax() }
}
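To see the whole thing in context, here is a minimal sketch of such a wrapper. The Stats name, the append method, and the cache invalidation are illustrative assumptions about the rest of the struct; data.max() is the current spelling of Swift 2's maxElement():

struct Stats<T: Comparable> {
    private var data: [T] = []
    private var maxValue: T?           // cached result

    // Computed property with a mutating getter, as in the answer above.
    var max: T? {
        mutating get {
            if maxValue == nil && !data.isEmpty {
                maxValue = data.max()  // maxElement() in Swift 2
            }
            return maxValue
        }
    }

    mutating func append(_ element: T) {
        data.append(element)
        maxValue = nil                 // invalidate the cache
    }
}

var s = Stats<Int>()
s.append(3)
s.append(7)
print(s.max as Any)                    // Optional(7), computed once and then cached

Note that a mutating getter also means the property can only be read on a var, not on a let instance.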

Related

Capturing a property of an object for a closure in Swift

I'm rewriting some code to append jobs to an array of closures rather than execute them directly:
var someObject: SomeType?
var jobsArray: [() -> ()] = []
// before rewriting
doExpensiveOperation(someObject!.property)
// 1st attempt at rewriting
jobsArray.append {
    doExpensiveOperation(someObject!.property)
}
However, because the value of someObject might change before the closure is executed, I'm now adding a capture list as follows:
// 2nd attempt at rewriting
jobsArray.append { [someObject] in
    doExpensiveOperation(someObject!.property)
}
Hopefully then if, say, someObject is subsequently set to nil before the closure executes, the closure will still access the intended instance.
But what's the neatest way to deal with the possibility that the value of .property might change before the closure is executed? Is there a better way than this?
// 3rd attempt at rewriting
let p = someObject!.property
jobsArray.append {
    doExpensiveOperation(p)
}
I'm not very keen on this solution because it means changing the original line of code. I'd prefer this but it doesn't work:
// 4th attempt at rewriting
jobsArray.append { [someObject!.property] in
    doExpensiveOperation(someObject!.property)
}
Pretty new to Swift, so all guidance gratefully received. Thanks!
A capture list such as [someObject] is actually syntactic sugar for [someObject = someObject], where the right hand side can be an arbitrary expression that gets bound to a new constant upon the closure being formed.
Therefore one option is to write your example as:
jobsArray.append { [property = someObject!.property] in
    doExpensiveOperation(property)
}
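As a self-contained sketch (the SomeType class and the print call are stand-ins for the question's real object and doExpensiveOperation), this shows that the right-hand side of a capture-list entry is evaluated once, when the closure is created:

class SomeType {
    var property = "initial"
}

var someObject: SomeType? = SomeType()
var jobsArray: [() -> ()] = []

jobsArray.append { [property = someObject!.property] in
    print(property)               // uses the value captured at append time
}

someObject!.property = "changed"
someObject = nil                  // neither change affects the captured value

jobsArray.forEach { $0() }        // prints "initial"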

UnsafeMutablePointer.pointee and didSet properties

I got some unexpected behavior using UnsafeMutablePointer on an observed property in a struct I created (on Xcode 10.1, Swift 4.2). See the following playground code:
struct NormalThing {
    var anInt = 0
}

struct IntObservingThing {
    var anInt: Int = 0 {
        didSet {
            print("I was just set to \(anInt)")
        }
    }
}
var normalThing = NormalThing(anInt: 0)
var ptr = UnsafeMutablePointer(&normalThing.anInt)
ptr.pointee = 20
print(normalThing.anInt) // "20\n"
var intObservingThing = IntObservingThing(anInt: 0)
var otherPtr = UnsafeMutablePointer(&intObservingThing.anInt)
// "I was just set to 0."
otherPtr.pointee = 20
print(intObservingThing.anInt) // "0\n"
Seemingly, modifying the pointee on an UnsafeMutablePointer to an observed property doesn't actually modify the value of the property. Also, the act of assigning the pointer to the property fires the didSet action. What am I missing here?
Any time you see a construct like UnsafeMutablePointer(&intObservingThing.anInt), you should be extremely wary about whether it'll exhibit undefined behaviour. In the vast majority of cases, it will.
First, let's break down exactly what's happening here. UnsafeMutablePointer doesn't have any initialisers that take inout parameters, so what initialiser is this calling? Well, the compiler has a special conversion that allows a & prefixed argument to be converted to a mutable pointer to the 'storage' referred to by the expression. This is called an inout-to-pointer conversion.
For example:
func foo(_ ptr: UnsafeMutablePointer<Int>) {
    ptr.pointee += 1
}

var i = 0
foo(&i)
print(i) // 1
The compiler inserts a conversion that turns &i into a mutable pointer to i's storage. Okay, but what happens when i doesn't have any storage? For example, what if it's computed?
func foo(_ ptr: UnsafeMutablePointer<Int>) {
    ptr.pointee += 1
}

var i: Int {
    get { return 0 }
    set { print("newValue = \(newValue)") }
}

foo(&i)
// prints: newValue = 1
This still works, so what storage is being pointed to by the pointer? To solve this problem, the compiler:
1. Calls i's getter, and places the resultant value into a temporary variable.
2. Gets a pointer to that temporary variable, and passes that to the call to foo.
3. Calls i's setter with the new value from the temporary.
Effectively doing the following:
var j = i // calling `i`'s getter
foo(&j)
i = j // calling `i`'s setter
It should hopefully be clear from this example that this imposes an important constraint on the lifetime of the pointer passed to foo – it can only be used to mutate the value of i during the call to foo. Attempting to escape the pointer and using it after the call to foo will result in a modification of only the temporary variable's value, and not i.
For example:
func foo(_ ptr: UnsafeMutablePointer<Int>) -> UnsafeMutablePointer<Int> {
    return ptr
}

var i: Int {
    get { return 0 }
    set { print("newValue = \(newValue)") }
}
let ptr = foo(&i)
// prints: newValue = 0
ptr.pointee += 1
ptr.pointee += 1 takes place after i's setter has been called with the temporary variable's new value, therefore it has no effect.
Worse than that, it exhibits undefined behaviour, as the compiler doesn't guarantee that the temporary variable will remain valid after the call to foo has ended. For example, the optimiser could de-initialise it immediately after the call.
Okay, but as long as we only get pointers to variables that aren't computed, we should be able to use the pointer outside of the call it was passed to, right? Unfortunately not; it turns out there are lots of other ways to shoot yourself in the foot when escaping inout-to-pointer conversions!
To name just a few (there are many more!):
A local variable is problematic for a similar reason to our temporary variable from earlier – the compiler doesn't guarantee that it will remain initialised until the end of the scope it's declared in. The optimiser is free to de-initialise it earlier.
For example:
func bar() {
    var i = 0
    let ptr = foo(&i)
    // Optimiser could de-initialise `i` here.
    // ... making this undefined behaviour!
    ptr.pointee += 1
}
A stored variable with observers is problematic because under the hood it's actually implemented as a computed variable that calls its observers in its setter.
For example:
var i: Int = 0 {
    willSet(newValue) {
        print("willSet to \(newValue), oldValue was \(i)")
    }
    didSet(oldValue) {
        print("didSet to \(i), oldValue was \(oldValue)")
    }
}
is essentially syntactic sugar for:
var _i: Int = 0

func willSetI(newValue: Int) {
    print("willSet to \(newValue), oldValue was \(i)")
}

func didSetI(oldValue: Int) {
    print("didSet to \(i), oldValue was \(oldValue)")
}

var i: Int {
    get {
        return _i
    }
    set {
        willSetI(newValue: newValue)
        let oldValue = _i
        _i = newValue
        didSetI(oldValue: oldValue)
    }
}
A non-final stored property on classes is problematic as it can be overridden by a computed property.
And this isn't even considering cases that rely on implementation details within the compiler.
For this reason, the compiler only guarantees stable and unique pointer values from inout-to-pointer conversions on global and static stored variables without observers. In any other case, attempting to escape and use a pointer from an inout-to-pointer conversion after the call it was passed to will lead to undefined behaviour.
Okay, but how does my example with the function foo relate to your example of calling an UnsafeMutablePointer initialiser? Well, UnsafeMutablePointer has an initialiser that takes an UnsafeMutablePointer argument (as a result of conforming to the underscored _Pointer protocol which most standard library pointer types conform to).
This initialiser is effectively the same as the foo function – it takes an UnsafeMutablePointer argument and returns it. Therefore when you do UnsafeMutablePointer(&intObservingThing.anInt), you're escaping the pointer produced from the inout-to-pointer conversion – which, as we've discussed, is only valid if it's used on a stored global or static variable without observers.
So, to wrap things up:
var intObservingThing = IntObservingThing(anInt: 0)
var otherPtr = UnsafeMutablePointer(&intObservingThing.anInt)
// "I was just set to 0."
otherPtr.pointee = 20
is undefined behaviour. The pointer produced from the inout-to-pointer conversion is only valid for the duration of the call to UnsafeMutablePointer's initialiser. Attempting to use it afterwards results in undefined behaviour. As matt demonstrates, if you want scoped pointer access to intObservingThing.anInt, you want to use withUnsafeMutablePointer(to:).
I'm actually currently working on implementing a warning (which will hopefully transition to an error) that will be emitted on such unsound inout-to-pointer conversions. Unfortunately I haven't had much time lately to work on it, but all things going well, I'm aiming to start pushing it forwards in the new year, and hopefully get it into a Swift 5.x release.
In addition, it's worth noting that while the compiler doesn't currently guarantee well-defined behaviour for:
var normalThing = NormalThing(anInt: 0)
var ptr = UnsafeMutablePointer(&normalThing.anInt)
ptr.pointee = 20
From the discussion on #20467, it looks like this will likely be something the compiler does guarantee well-defined behaviour for in a future release, because the base (normalThing) is a fragile stored global variable of a struct without observers, and anInt is a fragile stored property without observers.
I'm pretty sure the problem is that what you're doing is illegal. You can't just declare an unsafe pointer and claim that it points at the address of a struct property. (In fact, I don't even understand why your code compiles in the first place; what initializer does the compiler think this is?) The correct way, which gives the expected results, is to ask for a pointer that does point at that address, like this:
struct IntObservingThing {
    var anInt: Int = 0 {
        didSet {
            print("I was just set to \(anInt)")
        }
    }
}
withUnsafeMutablePointer(to: &intObservingThing.anInt) { ptr -> Void in
    ptr.pointee = 20 // I was just set to 20
}
print(intObservingThing.anInt) // 20

Swift semantics regarding dictionary access

I'm currently reading the excellent Advanced Swift book from objc.io, and I'm running into something that I don't understand.
If you run the following code in a playground, you will notice that when modifying a struct contained in a dictionary, a copy is made by the subscript access, but then it appears that the original value in the dictionary is replaced by the copy. I don't understand why. What exactly is happening?
Also, is there a way to avoid the copy? According to the author of the book, there isn't, but I just want to be sure.
import Foundation

class Buffer {
    let id = UUID()
    var value = 0

    func copy() -> Buffer {
        let new = Buffer()
        new.value = self.value
        return new
    }
}

struct COWStruct {
    var buffer = Buffer()

    init() { print("Creating \(buffer.id)") }

    mutating func change() -> String {
        if isKnownUniquelyReferenced(&buffer) {
            buffer.value += 1
            return "No copy \(buffer.id)"
        } else {
            let newBuffer = buffer.copy()
            newBuffer.value += 1
            buffer = newBuffer
            return "Copy \(buffer.id)"
        }
    }
}
var array = [COWStruct()]
array[0].buffer.value
array[0].buffer.id
array[0].change()
array[0].buffer.value
array[0].buffer.id
var dict = ["key": COWStruct()]
dict["key"]?.buffer.value
dict["key"]?.buffer.id
dict["key"]?.change()
dict["key"]?.buffer.value
dict["key"]?.buffer.id
// If the above `change()` was made on a copy, why has the original value changed?
// Did the copied & modified struct replace the original struct in the dictionary?
dict["key"]?.change() // Copy
is semantically equivalent to:
if var value = dict["key"] {
value.change() // Copy
dict["key"] = value
}
The value is pulled out of the dictionary, unwrapped into a temporary, mutated, and then placed back into the dictionary.
Because there are now two references to the underlying buffer (one from our local temporary value, and one from the COWStruct instance in the dictionary itself), we force a copy of the underlying Buffer instance, as it's no longer uniquely referenced.
So, why doesn't
array[0].change() // No Copy
do the same thing? Surely the element should be pulled out of the array, mutated and then stuck back in, replacing the previous value?
The difference is that unlike Dictionary's subscript, which consists of a getter and a setter, Array's subscript consists of a getter and a special accessor called mutableAddressWithPinnedNativeOwner.
What this special accessor does is return a pointer to the element in the array's underlying buffer, along with an owner object to ensure that the buffer isn't deallocated from under the caller. Such an accessor is called an addressor, as it deals with addresses.
Therefore when you say:
array[0].change()
you're actually mutating the actual element in the array directly, rather than a temporary.
Such an addressor cannot be directly applied to Dictionary's subscript because it returns an Optional, and the underlying value isn't stored as an optional. So it currently has to be unwrapped with a temporary, as we cannot return a pointer to the value in storage.
In Swift 3, you can avoid copying your COWStruct's underlying Buffer by removing the value from the dictionary before mutating the temporary:
if var value = dict["key"] {
dict["key"] = nil
value.change() // No Copy
dict["key"] = value
}
As now only the temporary has a view onto the underlying Buffer instance.
And, as @dfri points out in the comments, this can be reduced down to:
if var value = dict.removeValue(forKey: "key") {
    value.change() // No Copy
    dict["key"] = value
}
saving on a hashing operation.
Additionally, for convenience, you may want to consider making this into an extension method:
extension Dictionary {
    mutating func withValue<R>(
        forKey key: Key, mutations: (inout Value) throws -> R
    ) rethrows -> R? {
        guard var value = removeValue(forKey: key) else { return nil }
        defer {
            updateValue(value, forKey: key)
        }
        return try mutations(&value)
    }
}
// ...
dict.withValue(forKey: "key") {
    $0.change() // No copy
}
In Swift 4, you should be able to use the values property of Dictionary in order to perform a direct mutation of the value:
if let index = dict.index(forKey: "key") {
    dict.values[index].change()
}
As the values property now returns a special Dictionary.Values mutable collection that has a subscript with an addressor (see SE-0154 for more info on this change).
However, currently (with the version of Swift 4 that ships with Xcode 9 beta 5), this still makes a copy. This is due to the fact that both the Dictionary and Dictionary.Values instances have a view onto the underlying buffer – as the values computed property is just implemented with a getter and setter that passes around a reference to the dictionary's buffer.
So when calling the addressor, a copy of the dictionary's buffer is triggered, therefore leading to two views onto COWStruct's Buffer instance, therefore triggering a copy of it upon change() being called.
I have filed a bug over this here. (Edit: This has now been fixed on master with the unofficial introduction of generalised accessors using coroutines, so will be fixed in Swift 5 – see below for more info).
In Swift 4.1, Dictionary's subscript(_:default:) now uses an addressor, so we can efficiently mutate values so long as we supply a default value to use in the mutation.
For example:
dict["key", default: COWStruct()].change() // No copy
The default: parameter uses @autoclosure so that the default value isn't evaluated if it isn't needed (such as in this case, where we know there's a value for the key).
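The same addressor-backed subscript is what makes the familiar counting idiom an in-place mutation rather than a get-modify-set cycle (Swift 4.1+):

var counts: [String: Int] = [:]
for word in ["swift", "dictionary", "swift"] {
    counts[word, default: 0] += 1 // mutates the stored value directly
}
print(counts)                     // e.g. ["swift": 2, "dictionary": 1]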
Swift 5 and beyond
With the unofficial introduction of generalised accessors in Swift 5, two new underscored accessors have been introduced, _read and _modify, which use coroutines in order to yield a value back to the caller. For _modify, this can be an arbitrary mutable expression.
The use of coroutines is exciting because it means that a _modify accessor can now perform logic both before and after the mutation. This allows them to be much more efficient when it comes to copy-on-write types, as they can for example deinitialise the value in storage while yielding a temporary mutable copy of the value that's uniquely referenced to the caller (and then reinitialising the value in storage upon control returning to the callee).
The standard library has already updated many previously inefficient APIs to make use of the new _modify accessor – this includes Dictionary's subscript(_:) which can now yield a uniquely referenced value to the caller (using the deinitialisation trick I mentioned above).
The upshot of these changes is that:
dict["key"]?.change() // No copy
will be able to perform a mutation of the value without having to make a copy in Swift 5 (you can even try this out for yourself with a master snapshot).
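To make the coroutine accessors concrete, here is a minimal sketch of a wrapper type using the underscored, unofficial _modify spelling described above (illustrative only; these accessors aren't stable API):

struct Box<Value> {
    private var storage: Value
    init(_ value: Value) { storage = value }

    var value: Value {
        get { return storage }
        _modify { yield &storage }    // lends mutable access to the caller, no temporary copy
    }
}

var box = Box([1, 2, 3])
box.value.append(4)                   // mutates the array in place via the yielded storage
print(box.value)                      // [1, 2, 3, 4]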

Can a condition be used to determine the type of a generic?

I will first explain what I'm trying to do and how I got to where I got stuck before getting to the question.
As a learning exercise for myself, I took some problems that I had already solved in Objective-C to see how I can solve them differently with Swift. The specific case that I got stuck on is a small piece that captures a value before and after it changes and interpolates between the two to create keyframes for an animation.
For this I had an object Capture with properties for the object, the key path and two id properties for the values before and after. Later, when interpolating the captured values, I made sure that they could be interpolated by wrapping each of them in a Value class that used a class cluster to return an appropriate class depending on the type of value it wrapped, or nil for types that weren't supported.
This works, and I am able to make it work in Swift as well following the same pattern, but it doesn't feel Swift like.
What worked
Instead of wrapping the captured values as a way of enabling interpolation, I created a Mixable protocol that the types could conform to and used a protocol extension for when the type supported the necessary basic arithmetic:
protocol SimpleArithmeticType {
    func +(lhs: Self, right: Self) -> Self
    func *(lhs: Self, amount: Double) -> Self
}

protocol Mixable {
    func mix(with other: Self, by amount: Double) -> Self
}

extension Mixable where Self: SimpleArithmeticType {
    func mix(with other: Self, by amount: Double) -> Self {
        return self * (1.0 - amount) + other * amount
    }
}
This part worked really well and enforced homogeneous mixing (a type can only be mixed with its own type), something that wasn't enforced in the Objective-C implementation.
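For instance (an illustrative addition, not from the original question), a standard type whose existing + and * operators already satisfy SimpleArithmeticType picks up mix(with:by:) from the extension for free. This uses the Swift-2-era protocol syntax above; current Swift would declare the operator requirements as static funcs:

extension Double: SimpleArithmeticType {}
extension Double: Mixable {}

let halfway = 2.0.mix(with: 6.0, by: 0.5)
// halfway == 4.0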
Where I got stuck
The next logical step, and this is where I got stuck, seemed to be to make each Capture instance (now a struct) hold two variables of the same mixable type instead of two AnyObject. I also changed the initializer argument from being an object and a key path to being a closure that returns an object, () -> T:
struct Capture<T: Mixable> {
    typealias Evaluation = () -> T
    let eval: Evaluation
    let before: T

    var after: T {
        return eval()
    }

    init(eval: Evaluation) {
        self.eval = eval
        self.before = eval()
    }
}
This works when the type can be inferred, for example:
let captureInt = Capture {
    return 3.0
}
// > Capture<Double>
but not with key-value coding, which returns AnyObject:
let captureAnyObject = Capture {
    return myObject.valueForKeyPath("opacity")!
}
error: cannot invoke initializer for type 'Capture' with an argument list of type '(() -> _)'
AnyObject does not conform to the Mixable protocol, so I can understand why this doesn't work. But I can check what type the object really is, and since I'm only covering a handful of mixable types, I thought I could cover all the cases and return the correct type of Capture. To see if this could even work, I made an even simpler example.
A simpler example
struct Foo<T> {
    let x: T

    init(eval: () -> T) {
        x = eval()
    }
}
which works when type inference is guaranteed:
let fooInt = Foo {
    return 3
}
// > Foo<Int>

let fooDouble = Foo {
    return 3.0
}
// > Foo<Double>
But not when the closure can return different types:
let condition = true
let foo = Foo {
    if condition {
        return 3
    } else {
        return 3.0
    }
}
error: cannot invoke initializer for type 'Foo' with an argument list of type '(() -> _)'
I'm not even able to define such a closure on its own.
let condition = true // as simple as it could be
let evaluation = {
    if condition {
        return 3
    } else {
        return 3.0
    }
}
error: unable to infer closure type in the current context
My Question
Is this something that can be done at all? Can a condition be used to determine the type of a generic? Or is there another way to hold two variables of the same type, where the type was decided based on a condition?
Edit
What I really want is to:
1. Capture the values before and after a change and save the pair (old + new) for later (a heterogeneous collection of homogeneous pairs).
2. Go through all the collected values and get rid of the ones that can't be interpolated (unless this step could be integrated with the collection step).
3. Interpolate each homogeneous pair individually (mixing old + new).
But it seems like this direction is a dead end when it comes to solving that problem. I'll have to take a couple of steps back and try a different approach (and probably ask a different question if I get stuck again).
As discussed on Twitter, the type must be known at compile time. Nevertheless, for the simple example at the end of the question you could just explicitly type
let evaluation: Foo<Double> = { ... }
and it would work.
So in the case of Capture and valueForKeyPath: IMHO you should cast (either safely or with a forced cast) the value to the Mixable type you expect it to be, and it should work fine. After all, I'm not sure valueForKeyPath: is supposed to return different types depending on a condition.
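Concretely, reusing the question's Capture and myObject, and assuming Double conforms to Mixable as sketched earlier, the cast could look like this (Swift-2-era syntax):

// Forced cast: assert that "opacity" really is (bridged to) a Double.
let captureOpacity = Capture {
    return myObject.valueForKeyPath("opacity")! as! Double
}
// > Capture<Double>

// Or safely, with a fallback when the cast fails:
let captureSafe = Capture {
    return (myObject.valueForKeyPath("opacity") as? Double) ?? 0.0
}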
What is the exact case where you would like to return two totally different types (that can't be implicitly cast, as in the simple case of Int and Double above) in the same evaluation closure?
In my full example I also have cases for CGPoint, CGSize, CGRect, and CATransform3D.
The limitations are just as you have stated, because of Swift's strict typing. All types must be definitely known at compile time, and each thing can be of only one type – even a generic (it is resolved by the way it is called at compile time). Thus, the only thing you can do is turn your type into an umbrella type that is much more like Objective-C itself:
let condition = true
let evaluation = { () -> NSObject in // *
    if condition {
        return 3
    } else {
        return NSValue(CGPoint: CGPointMake(0, 1))
    }
}
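The price of the umbrella type is that the caller has to recover the concrete type again. A minimal sketch of how that might look using as pattern matching (names are illustrative; NSNumber must be checked before NSValue because it is a subclass of it):

let value = evaluation()
switch value {
case let number as NSNumber:
    print("number: \(number.doubleValue)")
case let wrapped as NSValue:
    print("wrapped value: \(wrapped)")
default:
    print("unsupported type")
}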

How to handle initial nil value for reduce functions

I would like to learn and use more functional programming in Swift. So, I've been trying various things in playground. I don't understand Reduce, though. The basic textbook examples work, but I can't get my head around this problem.
I have an array of strings called "toDoItems". I would like to get the longest string in this array. What is the best practice for handling the initial nil value in such cases? I think this probably happens often. I thought of writing a custom function and using it.
func optionalMax(maxSofar: Int?, newElement: Int) -> Int {
    if let definiteMaxSofar = maxSofar {
        return max(definiteMaxSofar, newElement)
    }
    return newElement
}
// Just testing - nums is an array of Ints. Works.
var maxValueOfInts = nums.reduce(0) { optionalMax($0, $1) }
// ERROR: cannot invoke 'reduce' with an argument list of type '(nil, (_,_)->_)'
var longestOfStrings = toDoItems.reduce(nil) { optionalMax(count($0), count($1)) }
It might just be that Swift does not automatically infer the type of your initial value. Try making it clear by explicitly declaring it:
var longestOfStrings = toDoItems.reduce(nil as Int?) { optionalMax($0, count($1)) }
By the way, notice that I do not call count on $0 (your accumulator), since it is not a String but an optional Int (Int?).
Generally, to avoid confusion when reading the code later, I explicitly label the accumulator as a and the element coming in from the sequence as x:
var longestOfStrings = toDoItems.reduce(nil as Int?) { a, x in optionalMax(a, count(x)) }
This way should be clearer than $0 and $1 in code when the accumulator or the single element are used.
Hope this helps
Initialise it with an empty string "" rather than nil. Or you could even initialise it with the first element of the array, but an empty string seems better.
This is my second go at this, after writing some wrong code. It will return the longest string, if you are happy with an empty string being returned for an empty array:
toDoItems.reduce("") { count($0) > count($1) ? $0 : $1 }
Or if you want nil, use
toDoItems.reduce(nil as String?) { $0 == nil || count($1) > count($0!) ? $1 : $0 }
The problem is that the compiler cannot infer the types you are using for your seed and accumulator closure if you seed with nil, and you also need to get the optional type correct when using the optional string as $0.
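As a side note, in later Swift versions the "longest string or nil" question is usually answered without reduce at all, since max(by:) already returns an optional and therefore handles the empty-array case for you:

let items = ["buy milk", "walk the dog", "tidy up"]
let longest = items.max { $0.count < $1.count }
// longest == Optional("walk the dog"); nil for an empty array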