How does `NSAttributedString` equate attribute values of type `Any`? - swift

The enumerateAttribute(_:in:options:using:) method of NSAttributedString appears to equate arbitrary instances of type Any. Of course, Any does not conform to Equatable, so that should not be possible.
Question: How does the method compare one instance of Any to another?
Context: I am subclassing NSTextStorage in Swift, and need to provide my own implementation of this method.
Observations:
NSAttributedString attributes come in key-value pairs, with the keys being instances of type NSAttributedString.Key and the values being instances of type Any?, with each pair being associated with one or more ranges of characters in the string. At least, that is how the data structure appears from the outside; the internal implementation is opaque.
The enumerateAttribute method walks through the entire range of an NSAttributedString, effectively identifying each different value corresponding to a specified key, and with the ranges over which that value applies.
The values corresponding to a given key could be of multiple different types.
NSAttributedString seemingly has no way of knowing what underlying types the Any values might be, and thus seemingly no way of type casting in order to make a comparison of two given Any values.
Yet, the method somehow is differentiating among ranges of the string based on differences in the Any values.
Interestingly, the method is able to differentiate between values even when the underlying type does not conform to Equatable. I take this to be a clue that the method may be using some sort of reflection to perform the comparison.
Even more interesting, the method goes so far as to differentiate between values when the underlying type does conform to Equatable and the difference between two values is a difference that the specific implementation of Equatable intentionally ignores. In other words, even if a == b returns true, if there is a difference in opaque properties of a and b that are ignored by ==, the method will treat the values as being different, not the same.
I assume the method bridges to an implementation in ObjC.
Is the answer: It cannot be done in Swift?

As you know, Cocoa is Objective-C, so these are Objective-C NSDictionary objects, not Swift Dictionary objects. So equality comparison between them uses Objective-C isEqual, not Swift ==. We are not bound by Swift strict typing, the Equatable protocol, or anything else from Swift.
To illustrate, here's a slow and stupid but effective implementation of style run detection:
import UIKit

let s = NSMutableAttributedString(
    string: "howdy", attributes: [.foregroundColor: UIColor.red])
s.addAttributes([.foregroundColor: UIColor.blue],
    range: NSRange(location: 2, length: 1))
var lastatt = s.attributes(at: 0, effectiveRange: nil)
for ix in 1..<5 {
    let newatt = s.attributes(at: ix, effectiveRange: nil)
    if !(newatt as NSDictionary).isEqual(to: lastatt) {
        print("style run ended at \(ix)")
        lastatt = newatt
    }
}
That correctly prints:
style run ended at 2
style run ended at 3
So since it is always possible to compare the attributes at any index with the attributes at another, it is possible to implement attribute enumeration in Swift. (Whether that's a good idea is another question.)
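For completeness, here is a minimal sketch of how that comparison could drive an enumeration of whole attribute runs. enumerateStyleRuns is a hypothetical helper, not a Foundation API; it simply generalises the loop above by coalescing adjacent indexes whose attribute dictionaries compare equal under Objective-C isEqual(to:).
import UIKit

// Hypothetical helper (not a Foundation API): reports each run of attributes,
// merging adjacent indexes whose attribute dictionaries are equal under
// NSDictionary's isEqual(to:).
func enumerateStyleRuns(in text: NSAttributedString,
                        using block: ([NSAttributedString.Key: Any], NSRange) -> Void) {
    guard text.length > 0 else { return }
    var runStart = 0
    var runAttributes = text.attributes(at: 0, effectiveRange: nil)
    for index in 1..<text.length {
        let attributes = text.attributes(at: index, effectiveRange: nil)
        if !(attributes as NSDictionary).isEqual(to: runAttributes) {
            block(runAttributes, NSRange(location: runStart, length: index - runStart))
            runStart = index
            runAttributes = attributes
        }
    }
    block(runAttributes, NSRange(location: runStart, length: text.length - runStart))
}

// With the "howdy" string above this reports three runs: red, blue, red.
enumerateStyleRuns(in: s) { _, range in print(range) }
A real NSTextStorage subclass would presumably lean on the longestEffectiveRange variants rather than walking character by character, but the equality check at the heart of it is the same.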

Related

What is the use of hashable protocol in swift4?

Please explain the use of the Hashable protocol, with an implementation, in Swift.
Apple defines Hashable as "a type that provides an integer hash value." Okay, but what's a hash value?
To make an object conform to Hashable we need to provide a hashValue property that returns a consistent integer for each instance.
The Hashable protocol inherits from Equatable, so you may also need to implement an == function.
Note: If two objects compare as equal using == they should also generate the same hash value, but the reverse isn’t true – hash collisions can happen.
Before Swift 4.1, conforming to Hashable was complex because you needed to calculate a hashValue property by hand.
In Swift 4.1 this improved: hashValue can be synthesized on your behalf if all the properties conform to Hashable.
Swift 4.2 introduces a new Hasher struct that provides a randomly seeded, universal hash function to make all our lives easier. Refer to the documentation for more.
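As a short sketch of what that looks like in Swift 4.2+ (Person and CaseInsensitiveName are made-up types for illustration): synthesis covers the common case, and hash(into:) plus Hasher covers the manual case.
// Synthesized conformance: Person gets == and hash(into:) for free because
// all of its stored properties are Hashable.
struct Person: Hashable {
    let name: String
    let age: Int
}

// Manual conformance: hash only what == compares, feeding it to the Hasher.
struct CaseInsensitiveName: Hashable {
    let value: String

    static func == (lhs: CaseInsensitiveName, rhs: CaseInsensitiveName) -> Bool {
        return lhs.value.lowercased() == rhs.value.lowercased()
    }

    func hash(into hasher: inout Hasher) {
        hasher.combine(value.lowercased())   // equal values must hash equally
    }
}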
Quick answer:
We store a hash integer with each object so that objects which compare equal can be found quickly: the hash acts as an index, and the matching instance sits at (or near) that index.
Not-so-quick answer:
When you are dealing with a list, in order to find an object you need to iterate over the whole array and compare properties to find the one you are looking for; this can slow your app down as the list gets bigger.
When you use a Set, the mechanism under the hood uses hash values as indexes to find an object, so the cost is essentially computing the hash once and then going straight to your object (see the sketch after this answer). That is very cool indeed.
In order to use a Set, the element type needs to conform to the Hashable protocol. Since Swift 4.1, if your struct or enum and all of its properties conform to Hashable, the conformance to Hashable and Equatable is synthesized for you under the hood (classes still need a manual implementation).
If you do not meet those requirements, you will have to make your type conform to Equatable and Hashable yourself:
The Equatable protocol needs you to implement static func == (...) in order to compare your objects.
The Hashable protocol needs you to provide a hash value, as collision-free as you reasonably can, which must be the same for two objects whenever they are equal.
Hope this helps.
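A rough illustration of that lookup difference (the contents are arbitrary sample data):
let words = ["apple", "banana", "cherry"]
let wordSet = Set(words)

print(words.contains("cherry"))    // array: walks the elements and calls == on each
print(wordSet.contains("cherry"))  // set: hashes "cherry" and checks the matching bucket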
If an object conforms to the Hashable protocol, it needs to have a hashValue, as you mentioned. The hashValue can be used to compare objects / uniquely identify the object.
You can compare objects in two ways:
The === operator. This checks object references (so it can only be used with classes): it checks whether the left operand refers to the very same instance as the right operand. Even if both objects have exactly the same property values, if they are different instances it returns false.
The == operator (Equatable protocol). This checks whether the objects are equal to each other according to your static func ==, so you can say objects are equal to each other based on their properties rather than their references.
If you provide your own hashValue (and ==), you decide what makes two objects equal to each other, regardless of the references to the objects. You can use objects that conform to the Hashable protocol in a Set, because a Set checks whether elements are equal to each other based on their hashValue (see the sketch below).
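A small sketch of that distinction, using a made-up Point class (Swift 4.2+ syntax with hash(into:)):
final class Point: Hashable {
    let x: Int
    let y: Int
    init(x: Int, y: Int) { self.x = x; self.y = y }

    // Value-based equality: two points are equal if their coordinates match.
    static func == (lhs: Point, rhs: Point) -> Bool {
        return lhs.x == rhs.x && lhs.y == rhs.y
    }
    func hash(into hasher: inout Hasher) {
        hasher.combine(x)
        hasher.combine(y)
    }
}

let a = Point(x: 1, y: 2)
let b = Point(x: 1, y: 2)
print(a == b)    // true: same property values
print(a === b)   // false: two different instances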
The Hashable documentation gives one concrete example of what it's for:
You can use any type that conforms to the Hashable protocol in a set or as a dictionary key.
You can think of a hash value as a quick approximation of equality. Two elements that are equal will have the same hash value but two elements with the same hash value are not guaranteed to actually be equal.

Are Int, String etc. considered to be 'primitives' in Swift?

Types representing numbers, characters, and strings are implemented using structures in Swift.
An excerpt from the official documentation:
Data types that are normally considered basic or primitive in other
languages—such as types that represent numbers, characters, and
strings—are actually named types, defined and implemented in the Swift
standard library using structures.
Does that mean the following:
Int
Float
String
// etc
... are not considered as primitives?
Yes and no...
As other answers have noted, in Swift there's no difference at the language level between the things one thinks of as "primitives" in other languages and the other struct types in the standard library or the value types you can create yourself. For example, it's not like Java, where there's a big difference between int and Integer and it's not possible to create your own types that behave semantically like the former. In Swift, all types are "non-primitive" or "user-level": the language features that define the syntax and semantics of, say, Int are no different from those defining CGRect or UIScrollView or your own types.
However, there is still a distinction. A CPU has native instructions for tasks like adding integers, multiplying floats, and even taking vector cross products, but not for things like insetting rects or searching lists. One of the things people mean when they call some of a language's types "primitive" is that those are the types for which the compiler provides hooks into the underlying CPU architecture, so that the things you do with those types map directly to basic CPU instructions. (That is, so operations like "add two integers" don't get bogged down in object lookups and function calls.)
Swift still has that distinction — certain standard library types like Int and Float are special in that they map to basic CPU operations. (And in Swift, the compiler doesn't offer any other means to directly access those operations.)
The difference with many other languages is that for Swift, the distinction between "primitive" types and otherwise is an implementation detail of the standard library, not a "feature" of the language.
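One way to see that the distinction is not a language feature: Int takes extensions exactly like a type you write yourself (squared and Meters below are made-up examples).
// Int is an ordinary struct at the language level, so it can be extended
// just like a user-defined type.
extension Int {
    var squared: Int { return self * self }
}

struct Meters {
    var value: Double
}
extension Meters {
    var feet: Double { return value * 3.28084 }
}

print(5.squared)              // 25
print(Meters(value: 2).feet)  // 6.56168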
Just to ramble on this subject some more...
When people talk about strings being a "primitive" type in many languages, that's a different meaning of the word — strings are a level of abstraction further away from the CPU than integers and floats.
Strings being "primitive" in other languages usually means something like it does in C or Java: The compiler has a special case where putting something in quotes results in some data getting built into the program binary, and the place in the code where you wrote that getting a pointer to that data, possibly wrapped in some sort of object interface so you can do useful text processing with it. (That is, a string literal.) Maybe the compiler also has special cases so that you can have handy shortcuts for some of those text processing procedures, like + for concatenation.
In Swift, String is "primitive" in that it's the standard string type used by all text-related functions in the standard library. But there's no compiler magic keeping you from making your own string types that can be created with literals or handled with operators. So again, there's much less difference between "primitives" and user types in Swift.
Swift does not have primitive types.
In all programming languages there are basic types that are available as part of the language. In Swift, these types are provided through the Swift standard library and are created using structures. They include the types for numbers, characters, and strings.
No, they're not primitives in the sense other languages define primitives. But they behave just like primitives, in the sense that they're passed by value. This is a consequence of the fact that they're internally implemented as structs. Consider:
import Foundation

struct ByValueType {
    var x: Int = 0
}
class ByReferenceType {
    var x: Int = 0
}

var str: String = "no value"
var byRef: ByReferenceType = ByReferenceType()
var byVal: ByValueType = ByValueType()

func foo(_ type: ByValueType) {
    // Value types are copied in, so only the local copy is mutated.
    var type = type
    type.x = 10
}
func foo(_ type: ByReferenceType) {
    // Reference types share the same instance, so the caller sees the change.
    type.x = 10
}
func foo(_ type: String) {
    var type = type
    type = "foo was here"
}

foo(byRef)
foo(byVal)
foo(str)

print(byRef.x)
print(byVal.x)
print(str)
The output is
10
0
no value
Just like Ruby, Swift does not have primitive types.
Int, for example, is implemented as a struct and conforms to the following protocols:
BitwiseOperationsType
CVarArgType
Comparable
CustomStringConvertible
Equatable
Hashable
MirrorPathType
RandomAccessIndexType
SignedIntegerType
SignedNumberType
You can check the source code of Bool.swift where Bool is implemented.
There are no primitives in Swift.
However, there is a distinction between "value types" and "reference types", which doesn't quite match how either C++ or Java uses those terms.
They are partially primitive.
Swift does not expose primitives the way other languages (such as Java) do, but Int, Int16, Float, and Double are defined as structures, so they behave like value types rather than object pointers.
Points in the Swift documentation in favour of a primitive-like view:
1. They have a Hashable extension with the axiom: x == y implies x.hashValue == y.hashValue, where hashValue is the currently assigned value.
2. Most of the init methods are there for type conversion and overflow checking (remember, Swift is type safe), e.g.:
Int.init(_ other: Float)
3. See the code below: an optional Int initialized with Int.init() has the default value 0, while an optional NSNumber initialized the same way prints nil.
var myInteger:Int? = Int.init()
print(myInteger) //Optional(0)
var myNumber:NSNumber? = NSNumber.init()
print(myNumber) //nil
Points in the Swift documentation against a primitive-like view:
As per the docs, there are two kinds of types: named types and compound types. There is no concept of primitive types.
Int supports extensions, e.g.:
extension Int : Hashable {}
All these types are made available through the Swift standard library rather than as built-in keywords.

How to constrain function parameter's protocol's associated types

For fun, I am attempting to extend the Dictionary type to replicate Python's Counter class. I am trying to implement init, taking a CollectionType as the sole argument. However, Swift does not allow this because of CollectionType's associated types. So, I am trying to write code like this:
import Foundation

// Must constrain extension with a protocol, not a class or struct
protocol SingletonIntProtocol { }
extension Int: SingletonIntProtocol { }

extension Dictionary where Value: SingletonIntProtocol { // i.e. Value == Int
    init(from sequence: SequenceType where sequence.Generator.Element == Key) {
        // Initialize
    }
}
However, Swift does not allow this syntax in the parameter list. Is there a way to write init so that it can take any type conforming to CollectionType whose values are of type Key (the name of the type used in the generic Dictionary<Key: Hashable, Value>)? Preferably I would not be forced to write init(from sequence: [Key]), so that I could take any CollectionType (such as a CharacterView, say).
You just have a syntax problem. Your basic idea seems fine. The correct syntax is:
init<Seq: SequenceType where Seq.Generator.Element == Key>(from sequence: Seq) {
The rest of this answer just explains why the syntax is this way. You don't really need to read the rest if the first part satisfies you.
The subtle difference is that you were trying to treat SequenceType where sequence.Generator.Element == Key as a type. It's not a type; it's a type constraint. What the correct syntax means is:
There is a type Seq such that Seq.Generator.Element == Key, and sequence must be of that type.
While that may seem to be the same thing, the difference is that Seq is one specific type at any given time. It isn't "any type that follows this rule." It's actually one specific type. Every time you call init with some type (say [Key]), Swift will create an entirely new init method in which Seq is replaced with [Key]. (In reality, Swift can sometimes optimize that extra method away, but in principle it exists.) That's the key point in understanding generic syntax.
Or you can just memorize where the angle brackets go, let the compiler remind you when you mess it up, and call it a day. Most people do fine without learning the type theory that underpins it.
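If you are on a modern toolchain, the same constraint reads a little differently; here is a hedged Swift 4+ sketch, assuming the Counter behaviour you want is simply counting occurrences (the init(counting:) label is made up for illustration):
extension Dictionary where Value == Int {
    // The generic parameter S is constrained in a where clause,
    // not in the parameter type itself.
    init<S: Sequence>(counting sequence: S) where S.Element == Key {
        self.init()
        for element in sequence {
            self[element, default: 0] += 1
        }
    }
}

let counts: [Character: Int] = Dictionary(counting: "hello world")
print(counts["l", default: 0])   // 3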

Recursive Enumerations in Swift

I haven't been learning Swift 2 (and C) for very long, and I've come to a point where I'm struggling a lot with recursive enumerations.
It seems that I need to put indirect before the enum if it is recursive. Then I have the first case, which has Int between the parentheses because later in the switch it returns an Int, is that right?
Now comes the first problem, with the second case, Addition. There I have to put ArithmeticExpression between the parentheses. I tried putting Int there, but it gave me an error saying it has to be an ArithmeticExpression instead of an Int. My question is: why? I can't picture what that is about. Why can't I just put two Ints there?
The next problem is about ArithmeticExpression again. In the func solution it goes into a value called expression which is of type ArithmeticExpression, is that correct? The rest is, at least for now, completely clear. If anyone could explain that to me in an easy way, that'd be great.
Here is the full code:
indirect enum ArithmeticExpression {
    case Number(Int)
    case Addition(ArithmeticExpression, ArithmeticExpression)
}

func solution(expression: ArithmeticExpression) -> Int {
    switch expression {
    case .Number(let value1):
        return value1
    case .Addition(let value1, let value2):
        return solution(value1) + solution(value2)
    }
}

var ten = ArithmeticExpression.Number(10)
var twenty = ArithmeticExpression.Number(20)
var sum = ArithmeticExpression.Addition(ten, twenty)
var endSolution = solution(sum)
print(endSolution)
PeterPan, I sometimes think that examples that are TOO realistic confuse more than help as it’s easy to get bogged down in trying to understand the example code.
A recursive enum is just an enum with associated values that are cases of the enum's own type. That's it: an enum whose cases can carry associated values of the same type as the enum itself.
Why is this a problem? And why the keyword "indirect" instead of, say, "recursive"? Why the need for any keyword at all?
Enums are "supposed" to be copied by value which means they should have case associated values that are of predictable size - made up of cases with the basic types like Integer and so on. The compiler can then guess the MAXIMUM possible size of a regular enum by the types of the raw or associated values with which it could be instantiated. After all you get an enum with only one of the cases selected - so whatever is the biggest option of the associated value types in the cases, that's the biggest size that enum type could get on initialisation. The compiler can then set aside that amount of memory on the stack and know that any initialisation or re-assignment of that enum instance could never be bigger than that. If the user sets the enum to a case with a small size associated value it is OK, and also if the user sets it to a case with the biggest associated value type.
However, as soon as you define an enum that has a mixture of cases with different-sized associated types, including values that are themselves enums of the same type (and so could in turn be initialised with any of the enum's cases), it becomes impossible to guess the maximum size of an enum instance. The user could keep initialising with a case whose associated value is of the same type as the enum, which is itself initialised with a case that is also of the same type, and so on and so on: an endless recursion, or tree, of possibilities. This recursion of enums pointing to enums continues until an enum is initialised with an associated value of a "simple" type that does not point to another enum. Think of a simple Integer type that "terminates" the chain of enums.
So the compiler cannot set aside the correct sized chunk of memory on the stack for this type of enum. Instead it treats the case associated values as POINTERS to the heap memory where the associated value is stored. That enum can itself point to another enum and so on. That is why the keyword is "indirect" - the associated value is referenced indirectly via a pointer and not directly by a value.
It is similar to passing an inout parameter to a function - instead of the compiler copying the value into the function, it passes a pointer to reference the original object in the heap memory.
So that's all there is to it. An enum that cannot easily have its maximum size guessed at because it can be initialised with enums of the same type and unpredictable sizes in chains of unpredictable lengths.
As the various examples illustrate, a typical use for such an enum is where you want to build-up trees of values like a formula with nested calculations within parentheses, or an ancestry tree with nodes and branches all captured in one enum on initialisation. The compiler copes with all this by using pointers to reference the associated value for the enum instead of a fixed chunk of memory on the stack.
So basically - if you can think of a situation in your code where you want to have chains of enums pointing to each other, with various options for associated values - then you will use, and understand, a recursive enum!
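For example, a linked list is about the smallest recursive enum you can write. List here is a made-up type whose node case stores another List indirectly, for exactly the sizing reasons described above.
indirect enum List<Element> {
    case empty
    case node(Element, List<Element>)   // stored behind a pointer, not inline
}

let numbers: List<Int> = .node(1, .node(2, .node(3, .empty)))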
The reason the Addition case takes two ArithmeticExpressions instead of two Ints is so that it could handle recursive situations like this:
ArithmeticExpression.Addition(ArithmeticExpression.Addition(ArithmeticExpression.Number(1), ArithmeticExpression.Number(2)), ArithmeticExpression.Number(3))
or, on more than one line:
let addition1 = ArithmeticExpression.Addition(ArithmeticExpression.Number(1), ArithmeticExpression.Number(2))
let addition2 = ArithmeticExpression.Addition(addition1, ArithmeticExpression.Number(3))
which represents:
(1 + 2) + 3
The recursive definition allows you to add not just numbers, but also other arithmetic expressions. That's where the power of this enum lies: it can express multiple nested addition operations.
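Feeding those nested expressions back into the solution function from the question shows the recursion doing its work:
print(solution(addition1))   // 3
print(solution(addition2))   // 6, i.e. (1 + 2) + 3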

Any is not identical to AnyObject

I have a payload getting shuttled from one part of the system to another.
The shuttle carries the payload as Any, so it can hold any kind of object, including non-objects like tuples, etc.
One part of the system accepts AnyObject, hence the error.
I'm confused about which type to use to carry stuff around so that it's compatible with all parts of the system.
Should I make a choice and stick to one of the types, either Any or AnyObject, for the system as a whole? What's the best choice for shuttling items if you are not concerned with their actual types?
We had a type Object in other languages that could carry anything around, but I'm not sure how this works in the Swift world.
Or is there a cast that could work between the two? If I'm 100% convinced that the incoming object is an AnyObject, could I load it off the shuttle (Any) as an AnyObject?
Note to negative voters: please help to clear up or improve this question if it doesn't make sense to you, since I'm new to Swift. I need an answer, not your vote.
Edit
Here's a case where I had to compare Any and AnyObject while unit testing. How would you handle such a situation?
class Test {
    var name: String = "test"
}

var anyObject: AnyObject = Test()
var any: Any = anyObject

//XCTAssert(any == anyObject, "Expecting them to be equal")
any == anyObject
Any will hold any kind of type, including structs and enums as well as classes. AnyObject will only hold classes. So what Any can store is a superset of what AnyObject can. No amount of casting will cram your custom structs or enums into an AnyObject.
Sometimes it seems like AnyObject is holding a struct such as a String, but it isn't: what has happened is that somewhere along the way Swift has converted your String to an NSString (which is a class, so it can be stored in an AnyObject).
(technically, Any is defined as something that implements 0 or more protocols, which anything does, whereas AnyObject is defined as a special protocol all classes implicitly conform to, and that is marked as an @objc protocol, so only classes can conform to it)
edit: to answer your question about comparisons – there’s no == operator for Any or AnyObject (how would it work if you equated an Any containing a String to an Any containing an Int?). You have to cast both sides back into what they really are before you can compare them using an appropriately-defined operator for that type.
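A small sketch of that casting approach, reusing the Test class from the question above; both sides come back to the concrete type before any comparison happens.
let lhs: Any = Test()
let rhs: AnyObject = Test()

if let left = lhs as? Test, let right = rhs as? Test {
    print(left === right)            // false: two distinct instances
    print(left.name == right.name)   // true: both names are "test"
}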