How can I make an instance conform to "Comparable"?

Does somebody have an idea how I can fix this?
In the following lines:
//Entry Image Shared
for shared in entryImagepostShared.sorted() {
postetImageShared.append(shared)
}
I get an error like: Referencing instance method 'sorted()' on 'Sequence' requires that 'Bool' conform to 'Comparable'.
The surrounding code looks like this:
postetImages.removeAll()
postetImageComment.removeAll()
postetImageID.removeAll()
postetImageShared.removeAll()
for document in snapshot!.documents {
guard let entryImageURL = document["imageURL"] as? [String],
let entryImageComment = document["postText"] as? [String],
let entryImagepostShared = document["postShared"] as? [Bool] else { return }
//Entry Image URL's
for url in entryImageURL.sorted() {
postetImages.append(url)
}
let count = postetImageURL.count
imageURLCount += count
//Entry Image Comment
for comment in entryImageComment.sorted() {
postetImageComment.append(comment)
}
//Entry Image Shared
for shared in entryImagepostShared.sorted() {
postetImageShared.append(shared)
}
}

Array's sorted() method requires that the array's values be Comparable, i.e. that there is a way to tell whether one value is less than another.
By default, the type Bool does not have this behavior.
If you need to sort an array of Bools, first you need to decide whether you want to see the false values first or the true values first.
Assuming the former, here are two ways to achieve this:
Use the sorted(by:) method, which takes a closure. The closure receives two values and returns true if the first should be ordered before the second (it must describe a strict ordering, so it should return false for equal values):
let sorted = arrayOfBools.sorted { $0 == false && $1 == true }
Make the Bool conform to Comparable by implementing your own extension. Then you can just use sorted():
extension Bool: Comparable {
public static func < (lhs: Bool, rhs: Bool) -> Bool {
// false is ordered before true; equal values must not compare as "less than"
lhs == false && rhs == true
}
}
let sorted1 = arrayOfBools.sorted()
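For a concrete illustration, here is a small self-contained check (the array below is just an example):
let arrayOfBools = [true, false, true, false]
print(arrayOfBools.sorted { $0 == false && $1 == true }) // [false, false, true, true]
print(arrayOfBools.sorted()) // [false, false, true, true], using the Comparable extension above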

Related

Swift Generics and Protocols

Given the code below, the compiler throws an error saying the protocol cannot conform to itself and that only structs/enums/classes can conform to it. This seems to defeat the purpose of being able to use protocols with generics. I'm trying to understand why this doesn't work; if I remove the generic and just put the protocol where Z is, everything is fine. It seems antithetical to what protocols and generics should allow.
Edit for question clarity: I need to take a value of type Any that can be cast to a dictionary of [String: MyProtocol] and pass it into the method printEm. printEm must use the generic, as it will be instantiating instances of the type Z.
protocol MyProtocol {
init()
var whoAmI:String { get }
}
func genericPassing(unknownThing:Any) {
let knownThing = unknownThing as? [String:MyProtocol]
if(knownThing != nil){
self.printEm(knownThing)
}
}
func printEm<Z:MyProtocol>(theThings:[String:Z]) {
let zCollection:[Z] = []
for thing in theThings {
print(thing.whoAmI)
zCollection.append(Z())
}
}
Edited printEm to illustrate why the generic is needed.
Edit for more complex code: the two primary requirements are the use of a generic to call Z() and the ability to take an Any and type-check and/or cast it so that it can be used in a generic method.
private func mergeUpdates<Z:RemoteDataSyncable>(source:inout Z, updates:[WritableKeyPath<Z, Any>:Any]) throws {
for key in updates.keys {
let value = updates[key]!
let valueDict = value as? [String:[WritableKeyPath<RemoteDataSyncable, Any>:Any]]
if(valueDict != nil) {
var currentValueArray = source[keyPath: key] as? [RemoteDataSyncable]
if(currentValueArray != nil) {
self.mergeUpdates(source: &currentValueArray!, updates: valueDict!)
}
else {
throw SyncError.TypeError
}
}
else {
source[keyPath: key] = value
}
}
}
private func mergeUpdates<Z:RemoteDataSyncable>(source:inout [Z], updates:[String:[WritableKeyPath<Z,Any>:Any]]) {
for key in updates.keys {
var currentObject = source.first { syncable -> Bool in
return syncable.identifier == key
}
if(currentObject != nil) {
try! self.mergeUpdates(source: &currentObject!, updates: updates[key]!)
}
else {
var newSyncable = Z()
try! self.mergeUpdates(source: &newSyncable, updates: updates[key]!)
source.append(newSyncable)
}
}
}
This is a perfect example of why protocols do not conform to themselves. In your code Z is MyProtocol, so Z() is MyProtocol(). How would that work? What type is it? How big is it? You then can't put them into a [Z], since they might be different types.
You mean to pass arbitrary MyProtocols and call init on the type of each element:
func printEm(theThings:[String: MyProtocol]) {
var zCollection:[MyProtocol] = []
for thing in theThings.values {
print(thing.whoAmI)
zCollection.append(type(of: thing).init())
}
}
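To see it in action, a minimal conforming type (Dog is just a made-up example) can be passed in, and the dynamic type's initializer is called for each value:
struct Dog: MyProtocol {
    init() {}
    var whoAmI: String { return "Dog" }
}
printEm(theThings: ["fido": Dog(), "rex": Dog()]) // prints "Dog" twice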
When I suggested using closures, this is the kind of thing I meant. This Updater can accept arbitrary ReferenceWritableKeyPaths, and when you pass it an Any update value, it will assign it to every key path that can accept it. That's kind of useless by itself, but it shows the technique. (Keep in mind that the updater retains the object, so that may be a problem you need to address.)
class Updater {
private(set) var updaters: [(Any) -> ()] = []
func add<Root, Value>(keyPath: ReferenceWritableKeyPath<Root, Value>, on root: Root) {
updaters.append { value in
if let value = value as? Value {
root[keyPath: keyPath] = value
}
}
}
func updateAll(with value: Any) {
for updater in updaters {
updater(value)
}
}
}
class Client {
var updateMe: Int = 0
}
let client = Client()
let updater = Updater()
updater.add(keyPath: \.updateMe, on: client)
updater.updateAll(with: 3)
client.updateMe // 3
The key lesson in this code is that the generic types on add(keyPath:on:) are erased (hidden) by the (Any) -> () closure at compile time, and the runtime as? check is done inside that closure, where the types are all known.

How to count the number of dimensions in Swift array [duplicate]

Suppose I have some function with which I want to populate my data structure from a multi-dimensional array (e.g. a Tensor class):
class Tensor {
init<A>(array:A) { /* ... */ }
}
While I could add a shape parameter, I would prefer to calculate the dimensions automatically from the array itself. If you know the dimensions a priori, it's trivial to read them off:
let d1 = array.count
let d2 = array[0].count
However, it's less clear how to do it for an N-dimensional array. I was thinking there might be a way to do it by extending the Array class:
extension Int {
func numberOfDims() -> Int {
return 0
}
}
extension Array {
func numberOfDims() -> Int {
return 1+Element.self.numberOfDims()
}
}
Unfortunately, this won't compile (rightfully so), as numberOfDims isn't defined for most types. However, I don't see any way of constraining Element, as arrays-of-arrays make things complicated.
I was hoping someone else might have some insight into how to solve this problem (or explain why this is impossible).
If you're looking to get the depth of a nested array (Swift's standard library doesn't technically provide you with multi-dimensional arrays, only jagged arrays) – then, as shown in this Q&A, you can use a 'dummy protocol' and typecasting.
protocol _Array {
var nestingDepth: Int { get }
}
extension Array : _Array {
var nestingDepth: Int {
return 1 + ((first as? _Array)?.nestingDepth ?? 0)
}
}
let a = [1, 2, 3]
print(a.nestingDepth) // 1
let b = [[1], [2, 3], [4]]
print(b.nestingDepth) // 2
let c = [[[1], [2]], [[3]], [[4], [5]]]
print(c.nestingDepth) // 3
(I believe this approach would've still worked when you had originally posted the question)
In Swift 3, this can also be achieved without a dummy protocol, but instead by casting to [Any]. However, as noted in the linked Q&A, this is inefficient as it requires traversing the entire array in order to box each element in an existential container.
Also note that this implementation assumes that you're calling it on a homogenous nested array. As Paul notes, it won't give a correct answer for [[[1], 2], 3].
If this needs to be accounted for, you could write a recursive method that iterates through each of the nested arrays and returns the minimum depth of the nesting.
protocol _Array {
func _nestingDepth(minimumDepth: Int?, currentDepth: Int) -> Int
}
extension Array : _Array {
func _nestingDepth(minimumDepth: Int?, currentDepth: Int) -> Int {
// for an empty array, the minimum depth is the current depth, as we know
// that _nestingDepth is called where currentDepth <= minimumDepth.
guard !isEmpty else { return currentDepth }
var minimumDepth = minimumDepth
for element in self {
// if current depth has exceeded minimum depth, then return the minimum.
// this allows for the short-circuiting of the function.
if let minimumDepth = minimumDepth, currentDepth >= minimumDepth {
return minimumDepth
}
// if element isn't an array, then return the current depth as the new minimum,
// given that currentDepth < minimumDepth.
guard let element = element as? _Array else { return currentDepth }
// get the new minimum depth from the next nesting,
// and incrementing the current depth.
minimumDepth = element._nestingDepth(minimumDepth: minimumDepth,
currentDepth: currentDepth + 1)
}
// the force unwrap is safe, as we know array is non-empty, therefore minimumDepth
// has been assigned at least once.
return minimumDepth!
}
var nestingDepth: Int {
return _nestingDepth(minimumDepth: nil, currentDepth: 1)
}
}
let a = [1, 2, 3]
print(a.nestingDepth) // 1
let b = [[1], [2], [3]]
print(b.nestingDepth) // 2
let c = [[[1], [2]], [[3]], [[5], [6]]]
print(c.nestingDepth) // 3
let d: [Any] = [ [[1], [2], [[3]] ], [[4]], [5] ]
print(d.nestingDepth) // 2 (the minimum depth is at element [5])
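For comparison, here is a minimal sketch of the [Any]-casting alternative mentioned above (Swift 3+, no dummy protocol needed; the property name nestingDepthViaAny is mine):
extension Array {
    var nestingDepthViaAny: Int {
        // Casting to [Any] boxes every element in an existential container,
        // which is why this is less efficient than the dummy-protocol approach.
        if let nested = first as? [Any] {
            return 1 + nested.nestingDepthViaAny
        }
        return 1
    }
}
print([[[1], [2]], [[3]]].nestingDepthViaAny) // 3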
Great question that sent me off on a goose chase!
To be clear: I’m talking below about the approach of using the outermost array’s generic type parameter to compute the number of dimensions. As Tyrelidrel shows, you can recursively examine the runtime type of the first element — although this approach gives nonsensical answers for heterogenous arrays like [[[1], 2], 3].
Type-based dispatch can’t work
As you note, your code as written doesn’t work because numberOfDims is not defined for all types. But is there a workaround? Does this direction lead somewhere?
No, it’s a dead end. The reason is that extension methods are statically dispatched for non-class types, as the following snippet demonstrates:
extension CollectionType {
func identify() {
print("I am a collection of some kind")
}
func greetAndIdentify() {
print("Hello!")
identify()
}
}
extension Array {
func identify() {
print("I am an array")
}
}
[1,2,3].identify() // prints "I am an array"
[1,2,3].greetAndIdentify() // prints "Hello!" and "I am a collection of some kind"
Even if Swift allowed you to extend Any (and it doesn’t), Element.self.numberOfDims() would always call the Any implementation of numberOfDims() even if the runtime type of Element.self were an Array.
This crushing static dispatch limitation means that even this promising-looking approach fails (it compiles, but always returns 1):
extension CollectionType {
var numberOfDims: Int {
return self.dynamicType.numberOfDims
}
static var numberOfDims: Int {
return 1
}
}
extension CollectionType where Generator.Element: CollectionType {
static var numberOfDims: Int {
return 1 + Generator.Element.numberOfDims
}
}
[[1],[2],[3]].numberOfDims // return 1 ... boooo!
This same constraint also applies to function overloading.
Type inspection can’t work
If there’s a way to make it work, it would be something along these lines, which uses a conditional instead of type-based method dispatch to traverse the nested array types:
extension Array {
var numberOfDims: Int {
return self.dynamicType.numberOfDims
}
static var numberOfDims: Int {
if let nestedArrayType = Generator.Element.self as? Array.Type {
return 1 + nestedArrayType.numberOfDims
} else {
return 1
}
}
}
[[1,2],[2],[3]].numberOfDims
The code above compiles — quite confusingly — because Swift takes Array.Type to be a shortcut for Array<Element>.Type. That completely defeats the attempt to unwrap.
What’s the workaround? There isn’t one. This approach can’t work because we need to say “if Element is some kind of Array,” but as far as I know, there’s no way in Swift to say “array of anything,” or “just the Array type regardless of Element.”
Everywhere you mention the Array type, its generic type parameter must be materialized to a concrete type or a protocol at compile time.
Cheating can work
What about reflection, then? There is a way. Not a nice way, but there is a way. Swift’s Mirror is currently not powerful enough to tell us what the element type is, but there is another reflection method that is powerful enough: converting the type to a string.
private let arrayPat = try! NSRegularExpression(pattern: "Array<", options: [])
extension Array {
var numberOfDims: Int {
let typeName = "\(self.dynamicType)"
return arrayPat.numberOfMatchesInString(
typeName, options: [], range: NSMakeRange(0, typeName.characters.count))
}
}
Horrid, evil, brittle, probably not legal in all countries — but it works!
Unfortunately I was not able to do this with a Swift array, but you can easily convert a Swift array to an NSArray.
extension NSArray {
func numberOfDims() -> Int {
var count = 0
if let x = self.firstObject as? NSArray {
count += x.numberOfDims() + 1
} else {
return 1
}
return count
}
}
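A quick usage sketch, bridging a Swift array to NSArray (the literal below is just an example):
import Foundation

let nested = [[1, 2], [3, 4]]
print((nested as NSArray).numberOfDims()) // 2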

Why I cannot access self in this particular method but am allowed in others?

I have seen this question on SO but the answers there appear to be talking about functions that return self.
I am creating an extension that starts like this:
extension Sequence where Element: Comparable {
func normalize() -> [Element] {
let count = self.count
}
}
I need to get the number of elements (self.count) and, in subsequent lines, use the elements by index, like self[i], but Swift complains that self has no member called count and will not let me use self that way.
How do I do that?
In Swift, the count property is not defined on Sequence but on Collection, so you need to extend Collection instead.
extension Collection where Element: Comparable {
func normalize() -> [Element] {
let count = self.count
}
}
If you also need to access the collection's values by index (self[i]), you should extend RandomAccessCollection instead, which provides both count (because a random-access collection is a collection) and subscripting.
extension RandomAccessCollection where Element: Comparable {
func normalize() -> [Element] {
let count = self.count
let first = self[startIndex]
let second = self[index(startIndex, offsetBy: 1)]
return [first, second]
}
}
Note: as RandomAccessCollection indices are not necessarily Ints, you must use the index(_:offsetBy:) function to create an index that can be passed to the subscript.
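If "normalize" is meant as scaling the values into 0...1, a minimal sketch could look like the following (this assumes floating-point elements, which is a stronger constraint than Comparable):
extension RandomAccessCollection where Element: BinaryFloatingPoint {
    func normalized() -> [Element] {
        // Scale every value into 0...1 based on the collection's min and max.
        guard let lo = self.min(), let hi = self.max(), hi > lo else {
            return map { _ in 0 }
        }
        return map { ($0 - lo) / (hi - lo) }
    }
}
print([2.0, 4.0, 6.0, 8.0].normalized()) // [0.0, 0.333..., 0.666..., 1.0]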

iOS Swift Error: 'T' is not convertible to 'MirrorDisposition'

I am attempting to create an extension method on the Array type that allows removing an item from an array:
extension Array {
func remove(item: AnyObject) {
for var i = self.count - 1; i >= 0; i-- {
if self[i] == item {
self.removeAtIndex(i)
}
}
}
}
On the test condition if self[i] == item, I get the following error: 'T' is not convertible to 'MirrorDisposition'
I've tried many different things, which include:
Using generics: remove<T>(item: T)
Using the === operator, which just gives the error 'T' does not conform to protocol 'AnyObject'
I'm new to Swift, so this is where my knowledge runs out. Any suggestions would be greatly appreciated.
You are getting an error because the compiler can't guarantee that the elements stored in your array can be compared with ==. You have to ensure that the contained type is Equatable. However, there is no way to add a method to a generic type that is more restrictive than the type itself. It is better to implement it as a function:
func removeItem<T: Equatable>(item: T, var fromArray array: [T]) -> [T] {
for i in reverse(0 ..< array.count) {
if array[i] == item {
array.removeAtIndex(i)
}
}
return array
}
Or you could add it as a more generic extension:
extension Array {
mutating func removeObjectsPassingTest(test: (object: T) -> Bool) {
for var i : Int = self.count - 1; i >= 0; --i {
if test(object: self[i]) {
self.removeAtIndex(i)
}
}
}
}
Then you can do this:
var list: [Int] = [1,2,3,2,1]
list.removeObjectsPassingTest({$0 == 2})
The way I would write this function is like this:
mutating func remove<U where U : Equatable>(item: U) {
for var i = self.count - 1; i >= 0; i-- {
if self[i] as U == item {
self.removeAtIndex(i)
}
}
}
Be sure to decorate your function with mutating.
I would use a different type parameter U since you can't really change Array's type parameter to be Equatable. Then I would try to cast the items to U to do the comparison.
Of course, this will fail if you try to call this function with an Array that is instantiated with a non-equatable type.
This is not a direct solution, but if you are trying to remove an item from an array, this is how I do it:
var numbers = [1, 2, 3, 4, 5]
if let possibleIndex = find(numbers, 1) {
numbers.removeAtIndex(possibleIndex)
}
The error message is confusing. The reason it does not work is that the Swift compiler cannot find an == operator for the Array's element type T. For this to work, T would need to conform to the Equatable protocol.
I don't know what MirrorDisposition is, but I think the problem is that you can't always equate two objects in an Array, because they are not guaranteed to be equatable.
Edit: Look at tng's solution. It will only work with equatable items, though.
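For reference, later Swift versions (Swift 2 and onward) allow constrained extensions, so the Equatable requirement can be expressed directly on the extension itself; a minimal sketch in current Swift syntax:
extension Array where Element: Equatable {
    // Remove every occurrence of item, iterating indices in reverse so that
    // removals don't invalidate the positions still to be visited.
    mutating func remove(item: Element) {
        for i in indices.reversed() where self[i] == item {
            remove(at: i)
        }
    }
}
var numbers = [1, 2, 3, 2, 1]
numbers.remove(item: 2)
print(numbers) // [1, 3, 1]
In current Swift you could also simply call numbers.removeAll { $0 == 2 }.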