Swift: How to assign a variable by reference, not by value?

I'm trying to get a reference to an Array and make modifications to it. Because Arrays in Swift are value types rather than reference types, if I assign my array to a variable first, I get a copy of the array instead of the actual array:
var odds = ["1", "3", "5"]
var evens = ["2", "4", "6"]
var source = odds
var destination = evens
var one = odds.first!
source.removeFirst() // only removes the first element of the `source` array, not the `odds` array
destination.append(one)
When we look at the odds and evens arrays, they are unaltered, because we only changed the source and destination copies.
I know that I can use the inout parameter attribute on a function to pass them by reference, instead of by value:
func move(inout source: [String], inout destination: [String], value: String) {
    source.removeAtIndex(source.indexOf(value)!)
    destination.append(value)
}

move(&odds, destination: &evens, value: one)
Is there a way to assign these arrays to a variable by reference, instead of by value?

Array is a struct, which means it's a value type in Swift. Because of this, arrays always behave according to value semantics rather than reference semantics. The problem here is that you're attempting to use mutable, reference-based logic to operate on value types.
You don't want to rely on mutations occurring inside the function to propagate back to the caller. As you've found, this is only possible with inout parameters. What you should do instead is return the mutated array from the function back to the caller. The point of value-oriented programming is that it shouldn't matter which array you have, but rather that any two equivalent arrays or value types are interchangeable.
It's slightly easier to imagine with another value type. Take an Int for example, and this function that does some math.
func addFive(int: Int) -> Int {
    return int + 5
}
Now consider a similar function, but written in the reference oriented style that you're attempting to use:
func addFive(inout int: Int) {
    int = int + 5
}
You can see it's simply not natural to operate on value types this way. Instead, just return the updated value (the modified arrays) from your function and carry on from there.
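For comparison, here's a small usage sketch of both styles; with both overloads declared, the compiler picks the inout version when the argument is passed with &:
var n = 10

// value style: operate on the value and use what's returned
n = addFive(n)    // n is now 15

// reference style: mutate the caller's variable in place
addFive(&n)       // n is now 20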
Here is your function refactored with value semantics.
func move(source: [String], destination: [String], value: String) -> ([String], [String]) {
    var mutableSource = source
    var mutableDestination = destination
    mutableSource.removeAtIndex(source.indexOf(value)!)
    mutableDestination.append(value)
    return (mutableSource, mutableDestination)
}

let (updatedSource, updatedDestination) = move(odds, destination: evens, value: one)
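If you want the original variables to reflect the change, assign the returned values back; a small usage sketch continuing from the line above:
odds = updatedSource          // odds no longer contains the moved element
evens = updatedDestination    // evens now ends with the moved element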

You cannot assign an array to a variable by reference in Swift.
"In Swift, Array, String, and Dictionary are all value types..."
Source: https://developer.apple.com/swift/blog/?id=10

If you need arrays that can be manipulated by reference, you can create a class that encapsulates an array and use it for your variables.
Here's an example:
class ArrayRef<Element>: CustomStringConvertible
{
    var array: [Element] = []

    init() {}
    init(Type: Element.Type) {}
    init(fromArray: [Element]) { array = fromArray }
    init(_ values: Element...) { array = values }

    var count: Int { return array.count }

    // allow use of subscripts to manipulate elements
    subscript(index: Int) -> Element
    {
        get { return array[index] }
        set { array[index] = newValue }
    }

    // allow short syntax to access array content
    // example: myArrayRef[].map({ $0 + "*" })
    subscript() -> [Element]
    {
        get { return array }
        set { array = newValue }
    }

    // allow printing the array, example: print(myArrayRef)
    var description: String { return "\(array)" }

    // delegate append method to internal array
    func append(newElement: Element)
    { array.append(newElement) }

    // add more delegation to array methods as needed ...
}
// being an object, the ArrayRef class is always passed as a reference
func modifyArray(X: ArrayRef<String>)
{
    X[2] = "Modified"
}

var a = ArrayRef("A", "B", "C")
modifyArray(a)
print(a) // --> a is now ["A", "B", "Modified"]

// various means of declaration ...
var b = ArrayRef<String>()
b[] = ["1", "2", "3"]
var c = ArrayRef(fromArray: b[])

// use .array to modify array content (or implement delegation in the class)
c.array += a[] + ["X", "Y", "Z"]
Note that you could also define your arrays as NSMutableArrays, which are Obj-C classes and are passed by reference. That achieves a similar effect, although the available methods differ from those of a regular Swift array.
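A minimal sketch of that NSMutableArray approach (using current Foundation method names):
import Foundation

let original = NSMutableArray(array: ["A", "B", "C"])
let alias = original         // no copy is made: both names refer to the same instance
alias.removeObject(at: 0)
print(original)              // prints (B, C): the removal is visible through both names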

I recommend the following for didactic purposes only; I advise against using it in production code.
You can pass around a "reference" to something via an UnsafePointer instance.
class Ref<T> {
    private var ptr: UnsafePointer<T>!

    var value: T { return ptr.pointee }

    init(_ value: inout T) {
        withUnsafePointer(to: &value) { ptr = $0 }
    }
}
var a = ["1"]
var ref = Ref(&a)
print(a, ref.value) // ["1"] ["1"]
a.append("2")
print(a, ref.value) // ["1", "2"] ["1", "2"]
ref.value.removeFirst()
print(a, ref.value) // ["2"] ["2"]
Thus, you can simulate a reference to a variable via the above class, which stores a pointer to the given variable.
Please note that this is a simple use case, and it will behave as expected only if the variable doesn't get destroyed before the pointer; otherwise the memory initially occupied by the variable will be reused for something else, and the unsafe pointer will no longer be valid. (Strictly speaking, letting the pointer escape the withUnsafePointer(to:) closure, as the initializer above does, is already undefined behavior.) Take, for example, the following code:
var ref: Ref<[String]>!

// adding an inner scope to simulate `a` being destroyed
do {
    var a: [String] = ["a"]
    ref = Ref(&a)
    print(a, ref.value)
    a = ["b"]
    print(a, ref.value)
}

// `a` was destroyed, however its place on the stack was taken by `b`
var b: [String: Int] = ["a": 1]

// however `b` is not an array, thus the next line will crash
print(ref.value)

Why does this compile without mutating keyword?

I feel like something is broken with the value semantics here. Consider:
struct A {
    var b = 0
    mutating func changeB() {
        b = 6
    }
}

struct B {
    var arr = [A]()
    func changeArr() {
        /* as expected, this won't compile
           unless changeArr() is mutating:
           arr.append(A())
        */
        // but this compiles, despite changeB being mutating!
        for var a in arr {
            a.changeB()
        }
    }
}
Why can this example mutate the struct contents without marking the function as mutating? In true value semantics, any time you change any part of a value, the whole value should be considered changed, and this is usually the behavior in Swift, but in this example it is not. Further, adding a didSet observer to var arr reveals that changeArr is not considered a mutation of the value.
for var a in arr {
    a.changeB()
}
This is copying an element from arr out into a and leaving arr unchanged.
If you directly access the elements inside arr via their indexes, then it will mutate and require the mutating keyword.
The reason changeArr isn't mutating is that it isn't really doing anything: it works on local copies of the A values. If you really want the method to do something meaningful, it needs to be changed to
mutating func changeArrForReal() {
    for index in arr.indices {
        arr[index].changeB()
    }
}

How to return a dictionary by reference in Swift?

In the following example, you can see that the dictionary in the myStruct instance is not returned by reference from the getDictionary() function. Therefore, any changes made to the returned dictionary are only made to a copy. How can you return the dictionary by reference?
struct myStruct {
    func getDictionary() -> [Int: String] {
        return dictionary
    }
    private var dictionary = [1: "one"]
}

let c = myStruct()
var sameDict = c.getDictionary()
sameDict[2] = "two"
print(c.getDictionary(), sameDict)
// prints: [1: "one"] [1: "one", 2: "two"]
Dictionary is a value type. It is not your choice whether a given data structure has reference or value semantics; that is Swift's choice. Only closures, classes, and functions have reference semantics.
"In Swift, Array, String, and Dictionary are all value types..."
Source: https://developer.apple.com/swift/blog/?id=10
Because Dictionary is a struct, the closest you can get is to pass it to a function using the inout keyword (which indicates that the parameter can be changed), like this:
struct MyStruct {
    func getDictionary() -> [Int: String] {
        return dictionary
    }
    mutating func updateDictionary(block: (inout [Int: String]) -> Void) {
        block(&dictionary)
    }
    private var dictionary = [1: "one"]
}

var c = MyStruct()
c.updateDictionary {
    $0[2] = "two"
}
print(c.getDictionary())
Update: When an inout parameter is modified inside a function, the modified value IS assigned back to the original variable when the function returns. @AlexanderNikolaychuk and @matt pointed that out in the comments. The behavior can be seen if you run the following code in a Playground:
struct MyStruct {
    var some = 1
}

var myStruct = MyStruct() {
    didSet {
        print("didSet")
    }
}

func pass(something: inout MyStruct) {
    something.some = 2
    print("After change")
}

pass(something: &myStruct)
This will print:
After change
didSet
Just saying.
It seems that you did not quite understand the difference between struct and class.
When you initialise the struct and assign it to c, you have your first copy of it. Then you initialise a new variable, calling it sameDict, and copy the value of the c dictionary into it. Then you modify the copy called sameDict. The dictionary of c is still the same.
Check this doc:
https://docs.swift.org/swift-book/LanguageGuide/ClassesAndStructures.html
Structs are passed around by copying them; classes are passed by reference.
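A minimal sketch of that difference, using two hypothetical point types:
struct PointS { var x = 0 }   // value type
class PointC { var x = 0 }    // reference type

var s1 = PointS()
var s2 = s1                   // s2 is an independent copy
s2.x = 5
print(s1.x, s2.x)             // 0 5

let c1 = PointC()
let c2 = c1                   // c2 refers to the same instance
c2.x = 5
print(c1.x, c2.x)             // 5 5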

How to count the number of dimensions in Swift array [duplicate]

Suppose I have some function with which I want to populate my data structure from a multi-dimensional array (e.g. a Tensor class):
class Tensor {
    init<A>(array: A) { /* ... */ }
}
While I could add a shape parameter, I would prefer to calculate the dimensions automatically from the array itself. If you know the dimensions a priori, it's trivial to read them off:
let d1 = array.count
let d2 = array[0].count
However, it's less clear how to do it for an N-dimensional array. I was thinking there might be a way to do it by extending the Array class:
extension Int {
    func numberOfDims() -> Int {
        return 0
    }
}

extension Array {
    func numberOfDims() -> Int {
        return 1 + Element.self.numberOfDims()
    }
}
Unfortunately, this won't (rightfully so) compile, as numberOfDims isn't defined for most types. However, I don't see any way of constraining Element, as arrays-of-arrays make things complicated.
I was hoping someone else might have some insight into how to solve this problem (or explain why this is impossible).
If you're looking to get the depth of a nested array (Swift's standard library doesn't technically provide you with multi-dimensional arrays, only jagged arrays) – then, as shown in this Q&A, you can use a 'dummy protocol' and typecasting.
protocol _Array {
    var nestingDepth: Int { get }
}

extension Array: _Array {
    var nestingDepth: Int {
        return 1 + ((first as? _Array)?.nestingDepth ?? 0)
    }
}
let a = [1, 2, 3]
print(a.nestingDepth) // 1
let b = [[1], [2, 3], [4]]
print(b.nestingDepth) // 2
let c = [[[1], [2]], [[3]], [[4], [5]]]
print(c.nestingDepth) // 3
(I believe this approach would've still worked when you had originally posted the question)
In Swift 3, this can also be achieved without a dummy protocol, but instead by casting to [Any]. However, as noted in the linked Q&A, this is inefficient as it requires traversing the entire array in order to box each element in an existential container.
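A sketch of that [Any]-casting variant (the property name nestingDepthViaAny is my own):
extension Array {
    // each cast to [Any] boxes every element in an existential container,
    // which is why this is slower than the dummy-protocol version
    var nestingDepthViaAny: Int {
        if let nested = first as? [Any] {
            return 1 + nested.nestingDepthViaAny
        }
        return 1
    }
}

print([[1], [2, 3]].nestingDepthViaAny) // 2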
Also note that these implementations assume that you're calling them on a homogeneous nested array. As Paul notes, they won't give a correct answer for [[[1], 2], 3].
If this needs to be accounted for, you could write a recursive method that iterates through each of the nested arrays and returns the minimum depth of the nesting.
protocol _Array {
    func _nestingDepth(minimumDepth: Int?, currentDepth: Int) -> Int
}

extension Array: _Array {
    func _nestingDepth(minimumDepth: Int?, currentDepth: Int) -> Int {
        // for an empty array, the minimum depth is the current depth, as we know
        // that _nestingDepth is called where currentDepth <= minimumDepth.
        guard !isEmpty else { return currentDepth }

        var minimumDepth = minimumDepth

        for element in self {
            // if current depth has exceeded minimum depth, then return the minimum.
            // this allows for the short-circuiting of the function.
            if let minimumDepth = minimumDepth, currentDepth >= minimumDepth {
                return minimumDepth
            }

            // if element isn't an array, then return the current depth as the new minimum,
            // given that currentDepth < minimumDepth.
            guard let element = element as? _Array else { return currentDepth }

            // get the new minimum depth from the next nesting level,
            // incrementing the current depth.
            minimumDepth = element._nestingDepth(minimumDepth: minimumDepth,
                                                 currentDepth: currentDepth + 1)
        }

        // the force unwrap is safe, as we know the array is non-empty, therefore
        // minimumDepth has been assigned at least once.
        return minimumDepth!
    }

    var nestingDepth: Int {
        return _nestingDepth(minimumDepth: nil, currentDepth: 1)
    }
}
let a = [1, 2, 3]
print(a.nestingDepth) // 1
let b = [[1], [2], [3]]
print(b.nestingDepth) // 2
let c = [[[1], [2]], [[3]], [[5], [6]]]
print(c.nestingDepth) // 3
let d: [Any] = [ [[1], [2], [[3]] ], [[4]], [5] ]
print(d.nestingDepth) // 2 (the minimum depth is at element [5])
Great question that sent me off on a goose chase!
To be clear: I’m talking below about the approach of using the outermost array’s generic type parameter to compute the number of dimensions. As Tyrelidrel shows, you can recursively examine the runtime type of the first element — although this approach gives nonsensical answers for heterogeneous arrays like [[[1], 2], 3].
Type-based dispatch can’t work
As you note, your code as written doesn’t work because numberOfDims is not defined for all types. But is there a workaround? Does this direction lead somewhere?
No, it’s a dead end. The reason is that extension methods are statically dispatched for non-class types, as the following snippet demonstrates:
extension CollectionType {
    func identify() {
        print("I am a collection of some kind")
    }
    func greetAndIdentify() {
        print("Hello!")
        identify()
    }
}

extension Array {
    func identify() {
        print("I am an array")
    }
}

[1,2,3].identify()         // prints "I am an array"
[1,2,3].greetAndIdentify() // prints "Hello!" and "I am a collection of some kind"
Even if Swift allowed you to extend Any (and it doesn’t), Element.self.numberOfDims() would always call the Any implementation of numberOfDims() even if the runtime type of Element.self were an Array.
This crushing static dispatch limitation means that even this promising-looking approach fails (it compiles, but always returns 1):
extension CollectionType {
    var numberOfDims: Int {
        return self.dynamicType.numberOfDims
    }
    static var numberOfDims: Int {
        return 1
    }
}

extension CollectionType where Generator.Element: CollectionType {
    static var numberOfDims: Int {
        return 1 + Generator.Element.numberOfDims
    }
}

[[1],[2],[3]].numberOfDims // returns 1 ... boooo!
This same constraint also applies to function overloading.
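For instance, a small sketch of the same limitation with overloading (written in current Swift syntax), where the overload is chosen from the argument's static type at compile time:
func describe(_ x: Any) { print("chose the Any overload") }
func describe(_ x: [Int]) { print("chose the [Int] overload") }

let value: Any = [1, 2, 3]
describe(value)      // prints "chose the Any overload", despite holding an array
describe([1, 2, 3])  // prints "chose the [Int] overload"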
Type inspection can’t work
If there’s a way to make it work, it would be something along these lines, which uses a conditional instead of type-based method dispatch to traverse the nested array types:
extension Array {
    var numberOfDims: Int {
        return self.dynamicType.numberOfDims
    }
    static var numberOfDims: Int {
        if let nestedArrayType = Generator.Element.self as? Array.Type {
            return 1 + nestedArrayType.numberOfDims
        } else {
            return 1
        }
    }
}

[[1,2],[2],[3]].numberOfDims
The code above compiles — quite confusingly — because Swift takes Array.Type to be a shortcut for Array<Element>.Type. That completely defeats the attempt to unwrap.
What’s the workaround? There isn’t one. This approach can’t work because we need to say “if Element is some kind of Array,” but as far as I know, there’s no way in Swift to say “array of anything,” or “just the Array type regardless of Element.”
Everywhere you mention the Array type, its generic type parameter must be materialized to a concrete type or a protocol at compile time.
Cheating can work
What about reflection, then? There is a way. Not a nice way, but there is a way. Swift’s Mirror is currently not powerful enough to tell us what the element type is, but there is another reflection method that is powerful enough: converting the type to a string.
private let arrayPat = try! NSRegularExpression(pattern: "Array<", options: [])

extension Array {
    var numberOfDims: Int {
        let typeName = "\(self.dynamicType)"
        return arrayPat.numberOfMatchesInString(
            typeName, options: [], range: NSMakeRange(0, typeName.characters.count))
    }
}
Horrid, evil, brittle, probably not legal in all countries — but it works!
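For instance, the type name of [[1, 2], [3]] renders as Array<Array<Int>>, so the pattern matches twice:
print([[1, 2], [3]].numberOfDims) // 2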
Unfortunately I was not able to do this with a Swift array, but you can easily convert a Swift array to an NSArray.
extension NSArray {
    func numberOfDims() -> Int {
        var count = 0
        if let x = self.firstObject as? NSArray {
            count += x.numberOfDims() + 1
        } else {
            return 1
        }
        return count
    }
}
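A brief usage sketch, relying on Foundation bridging array literals to NSArray:
let nested: NSArray = [[1, 2], [3, 4]]
print(nested.numberOfDims()) // 2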

Swift for var list.enumerated()

I have an array of objects whose type is a struct with mutating functions. So I got this code:
for (index, object) in objects.enumerated() {
    otherArray[index] = object.someMutatingFunction(...)
}
This leads to the error Cannot use mutating member on immutable value of type 'Blabla', which I can fix by adding var:
for var (index, object) in objects.enumerated() {
    otherArray[index] = object.someMutatingFunction(...)
}
But then I get another warning, Variable 'index' was never mutated; consider changing to 'let' constant, which I don't know how to fix elegantly. The only idea I have is to add a new var variable. Is there anything else I can do to prevent this warning?
Prefix the object variable with the var keyword:
struct S {
    mutating func f() { }
}

let array = [S(), S()]

for (index, var object) in array.enumerated() {
    object.f()
}
Note, as Hamish points out in the comments to this answer, that the elements of the array will not be modified. Only the local copy of object inside the scope of the for loop is modified.
If you want to modify the array itself, you have to declare it as var outside the scope of the for loop and then assign to its indices.
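A minimal sketch of that index-based approach, with a hypothetical mutating method f():
struct S {
    var value = 0
    mutating func f() { value += 1 }
}

var array = [S(), S()]
for index in array.indices {
    array[index].f()              // mutates the element inside the array
}
print(array.map { $0.value })     // [1, 1]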
