Swift: Overriding Self-requirement is allowed, but causes runtime error. Why?

I just started to learn Swift (v. 2.x) because I'm curious how the new features play out, especially the protocols with Self-requirements.
The following example is going to compile just fine, but causes arbitrary runtime effects to happen:
// The protocol with Self requirement
protocol Narcissistic {
    func getFriend() -> Self
}

// Base class that adopts the protocol
class Mario : Narcissistic {
    func getFriend() -> Self {
        print("Mario.getFriend()")
        return self
    }
}

// Intermediate class that eliminates the
// Self requirement by specifying an explicit type
// (Why does the compiler allow this?)
class SuperMario : Mario {
    override func getFriend() -> SuperMario {
        print("SuperMario.getFriend()")
        return SuperMario()
    }
}

// Most specific class that defines a field whose
// (polymorphic) access will cause the world to explode
class FireFlowerMario : SuperMario {
    let fireballCount = 42

    func throwFireballs() {
        print("Throwing " + String(fireballCount) + " fireballs!")
    }
}

// Global generic function restricted to the protocol
func queryFriend<T : Narcissistic>(narcissistic: T) -> T {
    return narcissistic.getFriend()
}
// Sample client code
// Instantiate the most specific class
let m = FireFlowerMario()
// The call to the generic function is verified to return
// the same type that went in -- 'FireFlowerMario' in this case.
// But in reality, the method returns a 'SuperMario' and the
// call to 'throwFireballs' will cause arbitrary
// things to happen at runtime.
queryFriend(m).throwFireballs()
You can see the example in action on the IBM Swift Sandbox here.
In my browser, the output is as follows:
SuperMario.getFriend()
Throwing 32 fireballs!
(instead of 42! Or rather, 'instead of a runtime exception', as this method is not even defined on the object it is called on.)
Is this a proof that Swift is currently not type-safe?
EDIT #1:
Unpredictable behavior like this has to be unacceptable.
The real question is what exact meaning the keyword Self (with a capital first letter) has.
I couldn't find anything online, but there are at least these two possibilities:
- Self is simply a syntactic shortcut for the full class name it appears in, and it could be substituted with the latter without any change in meaning. But then, it cannot have the same meaning as when it appears inside a protocol definition.
- Self is a sort of generic/associated type (in both protocols and classes) that gets re-instantiated in deriving/adopting classes. If that is the case, the compiler should have refused the override of getFriend in SuperMario.
Maybe the true definition is neither of those. Would be great if someone with more experience with the language could shed some light on the topic.

Yes, there seems to be a contradiction. The Self keyword, when used as a return type, apparently means 'self as an instance of Self'. For example, given this protocol
protocol ReturnsReceived {
    /// Returns other.
    func doReturn(other: Self) -> Self
}
we can't implement it as follows
class Return: ReturnsReceived {
    func doReturn(other: Return) -> Self {
        return other // Error
    }
}
because we get a compiler error ("Cannot convert return expression of type 'Return' to return type 'Self'"), which disappears if we violate doReturn()'s contract and return self instead of other. And we can't write
class Return: ReturnsReceived {
    func doReturn(other: Return) -> Return { // Error
        return other
    }
}
because this is only allowed in a final class, even if Swift supports covariant return types. (The following actually compiles.)
final class Return: ReturnsReceived {
    func doReturn(other: Return) -> Return {
        return other
    }
}
On the other hand, as you pointed out, a subclass of Return can 'override' the Self requirement and merrily honor the contract of ReturnsReceived, as if Self were a simple placeholder for the conforming class' name.
class SubReturn: Return {
    override func doReturn(other: Return) -> SubReturn {
        // Of course this crashes if other is not a
        // SubReturn instance, but let's ignore this
        // problem for now.
        return other as! SubReturn
    }
}
I could be wrong, but I think that:
- if Self as a return type really means 'self as an instance of Self', the compiler should not accept this kind of Self requirement overriding, because it makes it possible to return instances which are not self; otherwise,
- if Self as a return type must be simply a placeholder with no further implications, then in our example the compiler should already allow overriding the Self requirement in the Return class.
That said (and no choice about the precise semantics of Self is bound to change this), your code illustrates one of those cases where the compiler can easily be fooled, and the best it can do is generate code that defers the checks to run-time. In this case, the checks that should be delegated to the runtime have to do with casting, and in my opinion one interesting aspect revealed by your example is that at a particular spot Swift seems not to delegate anything, hence the inevitable crash is more dramatic than it ought to be.
Swift is able to check casts at run-time. Let's consider the following code.
let sm = SuperMario()
let ffm = sm as! FireFlowerMario
ffm.throwFireballs()
Here we create a SuperMario and downcast it to FireFlowerMario. These two classes are not unrelated, and we are assuring the compiler (as!) that we know what we are doing, so the compiler leaves it as it is and compiles the second and third lines without a hitch. However, the program fails at run-time, complaining that it
Could not cast value of type
'SomeModule.SuperMario' (0x...) to
'SomeModule.FireFlowerMario' (0x...).
when trying the cast in the second line. This is not wrong or surprising behaviour. Java, for example, would do exactly the same: compile the code, and fail at run-time with a ClassCastException. The important thing is that the application reliably crashes at run-time.
Your code is a more elaborate way to fool the compiler, but it boils down to the same problem: there is a SuperMario where a FireFlowerMario is expected. The difference is that in your case we don't get a gentle "could not cast" message but, in a real Xcode project, an abrupt and terrifying error when calling throwFireballs().
In the same situation, Java fails (at run-time) with the same error we saw above (a ClassCastException), which means it attempts a cast (to FireFlowerMario) before calling throwFireballs() on the object returned by queryFriend(). The presence of an explicit checkcast instruction in the bytecode easily confirms this.
Swift, on the contrary, as far as I can see at the moment, does not try any cast before the call (no casting routine is called in the compiled code), so a horrible, uncaught error is the only possible outcome. If, instead, your code produced a run-time "could not cast" error message, or something as graceful as that, I would be completely satisfied with the behaviour of the language.
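A small defensive sketch, not from the discussion above: if the caller erases the static type by boxing the result as Any and then downcasts, the mismatch surfaces as a recoverable failed cast instead of a call on a mistyped object (at least in unoptimized builds; the optimizer is entitled to assume the static types are sound). The variable names are only illustrative.
// Force a genuine run-time type check by erasing the static type first.
let suspect = FireFlowerMario()
let friend: Any = queryFriend(suspect)
if let flowerFriend = friend as? FireFlowerMario {
    flowerFriend.throwFireballs()
} else {
    print("Could not cast the returned friend to FireFlowerMario")
}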

The issue here seems to be a contract violation:
You define getFriend() to return an instance of the receiver's type (Self). The problem is that SuperMario does not return self; it returns a new instance of type SuperMario.
Now, when FireFlowerMario inherits that method, the contract says that the method should return a FireFlowerMario, but instead the inherited method returns a SuperMario! This instance is then treated as if it were a FireFlowerMario; specifically, Swift tries to access the instance variable fireballCount, which does not exist on SuperMario, so you get garbage instead.
You can fix it like this:
class SuperMario : Mario {
    required override init() {
        super.init()
    }

    override func getFriend() -> SuperMario {
        print("SuperMario.getFriend()")
        // Dynamically create a new instance of the same type as the receiver.
        // (`dynamicType` is Swift 2 syntax; in Swift 3+ this is `type(of: self)`.)
        let myClass = self.dynamicType
        return myClass.init()
    }
}
Why does the compiler allow it? It has a hard time catching something like this, I guess. For SuperMario, the contract is still valid: the method getFriend does return an instance of the same class. The contract breaks when you create the subclass FireFlowerMario: should the compiler notice that a superclass might violate the contract? This would be an expensive check and probably more suited for a static analyzer, IMHO. (Also, what happens if the compiler doesn't have access to SuperMario's source? What happens if that class is from a library?)
So it's actually SuperMario's duty to ensure that the contract is still valid when subclassing.
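For completeness, a quick check of the fix above, assuming FireFlowerMario simply inherits the required initializer: the friend created through the dynamic metatype really is a FireFlowerMario, so the original sample now behaves the way the types promise.
let m = FireFlowerMario()
queryFriend(m).throwFireballs()
// SuperMario.getFriend()
// Throwing 42 fireballs!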

Related

Swift Passing a Generic from a Function

I have the following function:
Amplify.API.query(request: .get(
    self.getObjectForOCObject(ocObject: ocObject)!, byId: id)) { event in
The definition of the get function is:
public static func get<M: Model>(_ modelType: M.Type,
                                 byId id: String) -> GraphQLRequest<M?> {
My getObjectForOCObject is as follows:
func getObjectForOCObject<M: Model>(ocObject: NSObject) -> M.Type? {
    if type(of: ocObject) == type(of: OCAccount.self) {
        return Account.self as? M.Type
    }
    return nil
}
But I'm getting a "Generic parameter 'M' could not be inferred" error. I get that the compiler cannot determine what type of object this is. However, I don't know how to fix this. I need my getObjectForOCObject function to return what the call site is expecting. Essentially I'm trying to return the type of one object based on the type of another object.
Normally this call is done this way and it works...
Amplify.API.query(request: .get(Account.self, byId: id)) { event in
I'm trying to avoid having to create a number of functions to handle each different Model in my code, as that is being handled higher up in the call hierarchy, which is why I'm trying to do this. Otherwise I'm going to have to write essentially the same code in multiple places. In Objective-C one can pass objects around as the superclass while the inheriting class still comes along with it. But with Swift being statically typed, I'm running into this issue.
The closest I've gotten is this, changing the getObjectForOCObject return type to Model.Type? like so:
func getObjectForOCObject(ocObject: NSObject) -> Model.Type? {
    if type(of: ocObject) == type(of: OCAccount.self) {
        return Account.self
    }
    return nil
}
Then I get Cannot convert value of type 'Model.Type?' to expected argument type 'M.Type'. But aren't these essentially the same thing, based on the .get function above?
Hoping for some insight here. Also, FYI, I'm pretty terrible with Swift so I'm certain I'm definitely doing some bad things in Swift here. I'm an expert with ObjC. I'm just trying to create a bridge between the two for this Swift library I'm using.
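A minimal, self-contained sketch of the distinction the error points at, with no Amplify involved; Model, Account, Order, get, lookup and makeRequest below are toy stand-ins rather than the real Amplify API. Model.Type? is an existential metatype chosen at run time, while a generic parameter M has to be fixed at compile time at each call site, which is why the two don't line up.
protocol Model {}                 // stand-in for Amplify's Model
struct Account: Model {}
struct Order: Model {}

// Stand-in for the generic request builder: M must be known at compile time.
func get<M: Model>(_ modelType: M.Type, byId id: String) -> String {
    return "request for \(M.self) with id \(id)"
}

// Returns *some* conforming type, but the compiler no longer knows which one.
func lookup(name: String) -> Model.Type? {
    if name == "account" {
        return Account.self
    }
    return Order.self
}

// let r = get(lookup(name: "account")!, byId: "42")
// error: cannot convert value of type 'Model.Type' to expected argument type 'M.Type'

// The usual workaround is to branch while the concrete type is still spelled out:
func makeRequest(name: String, id: String) -> String? {
    switch name {
    case "account": return get(Account.self, byId: id)
    case "order":   return get(Order.self, byId: id)
    default:        return nil
    }
}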

Why does the protocol default value passed to the function not change, even though the function does when subclassing?

I have a protocol, to which I have assigned some default values:
protocol HigherProtocol {
    var level: Int { get }
    func doSomething()
}

extension HigherProtocol {
    var level: Int { 10 }

    func doSomething() {
        print("Higher level is \(level)")
    }
}
Then I have another protocol which conforms to the higher level protocol, but has different default values and implementation of functions:
protocol LowerProtocol: HigherProtocol {}

extension LowerProtocol {
    var level: Int { 1 }

    func doSomething() {
        print("Lower level is \(level)")
    }
}
I then create a class that conforms to the HigherProtocol, and then a subclass that conforms to the lower level protocol:
class HigherClass: HigherProtocol {}
class LowerClass: HigherClass, LowerProtocol {}
When I instantiate this lower class, however, it displays some odd behaviour:
let lowerClass = LowerClass()
lowerClass.level // is 1
lowerClass.doSomething() // Prints "Lower level is 10" to the console.
The default property is correct, but the default implementation of the function seems to be a hybrid of the two.
I'm wondering what's happening here?
You appear to be trying to use protocols to create multiple-inheritance. They're not designed for that, and even if you get this working, you're going to get bitten several times. Protocols are not a replacement for inheritance, multiple or otherwise. (As a rule, Swift favors composition rather than inheritance in any form.)
The problem here is that HigherClass conforms to HigherProtocol and so now has implementations for level and doSomething. LowerClass inherits from that, and wants to override those implementations. But the overrides are in a protocol extension, which is undefined behavior. See Extensions from The Swift Programming Language:
Extensions can add new functionality to a type, but they cannot override existing functionality.
Undefined behavior doesn't mean "it doesn't override." It means "anything could happen" including this weird case where it sometimes is overridden and sometimes isn't.
(As a side note, the situation is similar in Objective-C. Implementing a method in two different categories makes it undefined which one is called, and there's no warning or error to let you know when this happens. Swift's optimizations can make the behavior even more surprising.)
I wish the compiler could detect these kinds of mistakes and raise an error, but it doesn't. You'll need to redesign your system to not do this.
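One possible redesign, offered as a sketch rather than the only option: keep a protocol as the interface, but express the specialization with ordinary class inheritance instead of competing protocol-extension defaults. The protocol name Leveled is made up for this example.
protocol Leveled {
    var level: Int { get }
    func doSomething()
}

class HigherClass: Leveled {
    var level: Int { 10 }

    func doSomething() {
        print("Higher level is \(level)")
    }
}

class LowerClass: HigherClass {
    override var level: Int { 1 }

    override func doSomething() {
        print("Lower level is \(level)")
    }
}

let lower = LowerClass()
lower.doSomething()                  // "Lower level is 1"
(lower as Leveled).doSomething()     // still "Lower level is 1": class overrides stay dynamic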
Protocols are existential types; that is why you are confused. You need to expose your class through a protocol type. In your case you can type the value as LowerProtocol or HigherProtocol, so it prints 10 consistently. Let's write it like this:
let lowerClass: LowerProtocol = LowerClass()
or
let lowerClass: HigherProtocol = LowerClass()
lowerClass.level // now prints 10
lowerClass.doSomething() // Prints "Lower level is 10" to the console.

Initialize Non-Optional Property Before super.init call without duplicating code

One of the most frustrating situations for me in Swift is when I have a class like this:
class A: NSObject {
    var a: Int

    override init() {
        clear()
        super.init()
    }

    func clear() {
        a = 5
    }
}
Of course, this causes multiple compiler errors ('self' used in method call 'clear' before 'super.init' call and Property 'self.a' not initialized at super.init call).
And changing the order of the constructor lines only fixes the first error:
override init() {
    super.init()
    clear()
}
Usually, I end up doing one of two things. The first option is to make a an implicitly unwrapped optional, but I feel that this is terrible style because a will never be nil after the constructor returns:
var a: Int! = nil
The second option is to copy the functionality of my clear() function in the constructor:
override init() {
    a = 5
    super.init()
}
This works fine for this simplified example, but it unnecessarily duplicates a lot of code in more complex code bases.
Is there a way to initialize a without duplicating my clear function or making a an optional? If not, why not?!
The "why not" in this specific case is very straightforward. What you've written would allow me to write this:
class B: A {
    override func clear() {}
}
And then a would not be initialized. So what you've written can never be legal Swift.
That said, there's a deeper version that probably could be legal but isn't. If you marked the class final or if this were a struct, then the compiler might be able to prove that everything is correctly initialized along all code paths by inlining all the possible method calls, but the compiler doesn't do all that today; it's too complicated. The compiler just says "my proof engine isn't that strong so knock it off."
IMO, the correct solution here is a ! type, though I wouldn't add = nil. That's misleading. I would write it this way:
class A: NSObject {
    var a: Int! // You don't have to assign `!` types; they're automatically nil

    override init() {
        super.init()
        clear()
    }

    func clear() {
        a = 5
    }
}
This says "I am taking responsibility to make sure that a is going to be set before it is used." Which is exactly what you are doing. I do this all the time when I need to pass self as a delegate. I wish the compiler could explore every possible code path across every method call, but it doesn't today (and given what it might do to compile times, maybe I don't wish that).
but I feel that this is terrible style because a will never be nil after the constructor returns
That's exactly the point of ! types. They should never be nil by the time any other object can get their hands on it. If it could be nil, then you shouldn't be using !. I agree that ! is dangerous (since the compiler is no longer helping you), but it's not bad style.
The only other reasonable approach IMO is to assign default values. I wouldn't use the actual values of course; that would be an invitation to subtle bugs. I would just use some default value to get things in place.
class A: NSObject {
    var a = Int.min // -1 might be reasonable. I just like it to be clearly wrong

    override init() {
        super.init()
        clear()
    }

    func clear() {
        a = 5
    }
}
I don't love this approach, though I've used it in a few places. I don't think it gives you any safety over the ! (it won't crash, but it will almost certainly give you subtle behavioral bugs, and I'd rather have the crash). This is a place where the programmer must pay attention because the compiler isn't powerful enough to prove everything's correct, and the point of ! is to mark those places.
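A further variation, offered as my own sketch rather than part of the answer above: share the reset value through a static helper so init and clear cannot drift apart, while a stays non-optional. The helper name defaultA is made up, and this only works when clear() amounts to assigning values that don't depend on self.
class A: NSObject {
    var a = A.defaultA()   // default property values may call static methods

    override init() {
        super.init()
    }

    func clear() {
        a = A.defaultA()
    }

    private static func defaultA() -> Int {
        return 5
    }
}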

Difference between using Generic and Protocol as type parameters, what are the pros and cons of implement them in a function

Since Swift allows us to use both protocols and generics as parameter types in a function, the following scenario came to my mind:
protocol AProtocol {
    var name: String { get }
}

class ClassA: AProtocol {
    var name = "Allen"
}

func printNameGeneric<T: AProtocol>(param: T) {
    print(param.name)
}

func printNameProtocol(param: AProtocol) {
    print(param.name)
}
The first function uses a generic parameter with a type constraint, and the second function uses the protocol as the parameter type directly. However, the two functions have the same effect, which is the point that confuses me. So my questions are:
- What are the specific scenarios for each of them (or a case which can only be handled by one of them, but not the other)?
- For the given case, both functions produce the same result. Which one is better to implement (or what are the pros and cons of each)?
This great talk mentions generic specialization, an optimization that turns function dispatch from dynamic dispatch (functions with protocol-typed parameters) into static dispatch or inlining (functions with generic parameters). Since static dispatch and inlining are less expensive than dynamic dispatch, implementing functions with generics can provide better performance.
@Hamish also gave great information in this post; have a look for more details.
Here is a new question that came to me:
struct StructA: AProtocol {
    var a: Int
}

struct StructB: AProtocol {
    var b: Int
}
func buttonClicked(sender: UIButton) {
    var aVar: AProtocol

    if sender == self.buttonA {
        aVar = StructA(a: 1)
    } else if sender == self.buttonB {
        aVar = StructB(b: 2)
    }

    foo(param: aVar)
}
func foo<T: AProtocol>(param: T) {
    // do something
}
If several types conform to a protocol and are passed into a generic function under different run-time conditions, as shown above where pressing different buttons passes a different type (StructA or StructB) into the function, would generic specialization still work in this case?
There is actually a video from this year's WWDC about that (it was about performance of classes, structs and protocols; I don't have a link but you should be able to find it).
In your second function, where you pass any value that conforms to that protocol, you are actually passing a container that has 24 bytes of storage for the passed value and 16 bytes for type-related information (used to determine which methods to call, ergo dynamic dispatch). If the passed value is bigger than 24 bytes in memory, the object is allocated on the heap and the container stores a reference to that object! That is actually extremely time-consuming and should certainly be avoided if possible.
In your first function, where you use a generic constraint, the compiler actually creates another function that explicitly performs the function's operations on that type. (If you use this function with lots of different types, your code size may, however, increase significantly; see C++ code bloat for further reference.) The compiler can now statically dispatch the methods, inline the function if possible, and certainly does not have to allocate any heap space. As stated in the video mentioned above, code size does not have to increase significantly, as code can still be shared... so the function with the generic constraint is certainly the way to go!
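A quick way to observe the existential box, as a sketch; the exact sizes are an implementation detail of 64-bit platforms and can change between Swift versions.
protocol P {}

struct Small: P { var x: Int }            // fits in the 3-word inline buffer
struct Big: P { var a, b, c, d: Int }     // too big: stored out of line on the heap

print(MemoryLayout<Small>.size)   // 8
print(MemoryLayout<Big>.size)     // 32
print(MemoryLayout<P>.size)       // 40: 24-byte buffer + type metadata + witness table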
Now say we have the two protocols below:
protocol A {
    func sometingA()
}

protocol B {
    func sometingB()
}
Then a parameter needs to conform to both A and B.
// Generic solution
func methodGeneric<T: A>(t: T) where T: B {
    t.sometingA()
    t.sometingB()
}

// We need protocol C to define the parameter type
protocol C: A, B {}

// Protocol solution
func methodProtocol(c: C) {
    c.sometingA()
    c.sometingB()
}
It seems that nothing is wrong but when we define a struct like this:
struct S: A, B {
    func sometingB() {
        print("B")
    }

    func sometingA() {
        print("A")
    }
}
methodGeneric works, but we need to change struct S: A, B to struct S: C to make methodProtocol work. Some questions:
- Do we really need protocol C?
- Why wouldn't we just write func method(s: S)?
You can read the Generics documentation for additional information.

Why is `required` not a valid keyword on class functions in Swift?

It seems that there are a few situations in which a required keyword on Swift class functions would be extremely beneficial, particularly due to the ability of class functions to return Self.
When returning Self from a class func, there are unfortunately two restrictions that make implementing said function very difficult/inhibitive:
You cannot use Self as a type check inside the function implementation, ie:
class Donut {
    class func gimmeOne() -> Self {
        // Compiler error, 'Self' is only available in a protocol or as the result of a class method
        return Donut() as Self
    }
}
You cannot return the actual type of the class itself, ie:
class Donut {
    class func gimmeOne() -> Self {
        // Compiler error, 'Donut' is not convertible to 'Self'
        return Donut()
    }
}
The reason for these compiler errors is valid. If you have a GlazedDonut subclass that does not override this class function, it is possible that calling GlazedDonut.gimmeOne() will give you back a Donut, which is not a Self.
It seems this situation could be alleviated by allowing classes to specify these functions with required. This would ensure that any subclass overrides the method and incurs its own round of type checking, making sure that a GlazedDonut returns itself in all cases and eliminating the possibility for a Donut to come back.
Is there a technical, authoritative reason why this has not been added? I'd like to suggest it as an improvement to the Swift team, but want to ensure there isn't an obvious reason why it has been omitted, or cannot be accomplished.
The idea for this question originates here:
https://stackoverflow.com/a/25924224/88111
required is generally only used on initializers, because initializers are not always inherited in Swift. Therefore, to allow you to call an initializer on a variable class (i.e. a value of metatype type, say Foo.Type), you need to know that the class Foo, and all possible subclasses, have this initializer.
However, methods (both instance methods and class methods) are always inherited. Therefore, required is not necessary.
By the way, your assertion that "You cannot return the actual type of the class itself" is not true. (In fact, the error "'Self' is only available in a protocol or as the result of a class method" itself says you can return the type of the class itself.) Similar to in Objective-C, you can do:
class Donut {
    required init() { }

    class func gimmeOne() -> Self {
        // `self` here is the dynamic metatype, and the required initializer
        // guarantees every subclass can be constructed this way.
        // (In current Swift this is spelled self.init(); very early Swift allowed self().)
        return self.init()
    }
}
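A usage sketch, with GlazedDonut assumed to simply inherit the required initializer:
class GlazedDonut: Donut { }

let d = Donut.gimmeOne()        // a Donut
let g = GlazedDonut.gimmeOne()  // a GlazedDonut, with no override needed,
                                // because `self` is the dynamic metatype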
You could use a protocol to make the method 'required':
protocol IDonut {
    static func gimmeOne() -> Donut
}

class Donut: IDonut {
    class func gimmeOne() -> Donut {
        return Donut()
    }
}

class GlazedDonut: Donut {
    // The conformance to IDonut is inherited from Donut;
    // restating it here would be flagged as redundant.
    override class func gimmeOne() -> Donut {
        return GlazedDonut()
    }
}